<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yuval Hazaz</title>
    <description>The latest articles on DEV Community by Yuval Hazaz (@yuvalhazaz).</description>
    <link>https://dev.to/yuvalhazaz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F565282%2F7e8ebf86-4cef-4cc6-b010-a8e78cc5dfb3.png</url>
      <title>DEV Community: Yuval Hazaz</title>
      <link>https://dev.to/yuvalhazaz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuvalhazaz"/>
    <language>en</language>
    <item>
      <title>Modernizing Legacy Systems with Amplication's DB Schema Import</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Sun, 20 Aug 2023 14:53:59 +0000</pubDate>
      <link>https://dev.to/yuvalhazaz/modernizing-legacy-systems-with-amplications-db-schema-import-221g</link>
      <guid>https://dev.to/yuvalhazaz/modernizing-legacy-systems-with-amplications-db-schema-import-221g</guid>
      <description>&lt;p&gt;Modernizing legacy systems isn’t just about catching up with the latest and greatest tech; it’s about staying competitive. Yet, transitioning away from legacy systems can be daunting, especially with the risk of losing valuable data or facing compatibility issues. The effort involved in setting up numerous data models, crafting Data Transfer Objects (DTOs), and coding boilerplate for hundreds of existing tables, not to mention managing intricate relations, can be massive and time-consuming. This is where our latest feature - &lt;strong&gt;DB Schema Import&lt;/strong&gt; - steps in, offering an efficient and seamless way to bring your old infrastructure into the modern era without getting lost in the complexities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why modernize with Amplication?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Rapid Development
&lt;/h3&gt;

&lt;p&gt;Launch your projects faster. Amplication’s auto-generation tools drastically cut down the development process by handling boilerplate and infrastructure code, saving you months that you'd otherwise spend crafting code for your existing data models.&lt;/p&gt;

&lt;h3&gt;
  
  
  Consistency
&lt;/h3&gt;

&lt;p&gt;No more patchwork solutions. Amplication ensures your backend services are consistent, scalable, and up to industry standards.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stay in Control
&lt;/h3&gt;

&lt;p&gt;While Amplication streamlines the development process, you retain full ownership and control of your code. It's automation with autonomy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Savings
&lt;/h3&gt;

&lt;p&gt;By reducing manual work, you lower the risk of errors and the costs that come with them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Efficient Data Integration with the DB Schema Import
&lt;/h3&gt;

&lt;p&gt;Transitioning away from legacy systems doesn’t mean starting from scratch. With the DB Schema Import feature, you can effortlessly bring in your existing database schema, preserving the value of your legacy data and laying the foundation for new, modern applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started with DB Schema Import
&lt;/h2&gt;

&lt;p&gt;Ready to upgrade your legacy system? Here's a quick guide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Generate your Prisma schema&lt;/strong&gt;: Use &lt;a href="https://www.prisma.io/docs/concepts/components/introspection"&gt;Prisma's introspection&lt;/a&gt; to scan your legacy database and produce a &lt;strong&gt;&lt;code&gt;schema.prisma&lt;/code&gt;&lt;/strong&gt; file.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Upload to Amplication&lt;/strong&gt;: Once your schema is ready, bring it into Amplication, which will automatically set up your entities, fields, and relations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pick your Database&lt;/strong&gt;: Select either MySQL or PostgreSQL by installing the appropriate DB plugin.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build &amp;amp; Commit&lt;/strong&gt;: Generate and commit the code for your project via Amplication.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Check out our &lt;a href="https://docs.amplication.com/how-to/import-prisma-schema/"&gt;step-by-step guide&lt;/a&gt; for a deeper dive.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A Real-Life Example
&lt;/h2&gt;

&lt;p&gt;Let's dive into an example to demonstrate just how effortless it is to create a fully-functional service using this feature. In this walkthrough, we'll leverage a Prisma Schema file to automate the setup of entities, fields, and relations, laying the groundwork for a fully functioning application to manage events.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new ‘Event Management’ service with Amplication

&lt;ul&gt;
&lt;li&gt;Make sure to select MySQL or PostgreSQL database during the service creation wizard&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mBIT8gG_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mBIT8gG_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/0.png" alt="" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;After the service is created, click the “Create entities for my service” option&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MgrXmqhT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MgrXmqhT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/1.png" alt="" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Within the entities page, click the “Upload Prisma Schema” button&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_E3rSz_Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_E3rSz_Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/2.png" alt="" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Upload this sample &lt;a href="https://raw.githubusercontent.com/amplication/blog-sample-projects/main/db-import-examples/event-management.prisma"&gt;Prisma schema file&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nIu5t1KH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nIu5t1KH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/3.png" alt="" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datasource db {
  provider = "postgresql"
  url      = env("DB_URL")
}

generator client {
  provider = "prisma-client-js"
}

model Event {
    id          String      @id @default(uuid())
    name        String
    description String
    startDate   DateTime
    endDate     DateTime
    location    String
    attendees   Attendee[]
    sessions    Session[]
}

model Attendee {
    id          String      @id @default(uuid())
    name        String
    email       String     @unique
    eventId     String
    tickets     Ticket[]
    event       Event       @relation(fields: [eventId], references: [id])
}

model Ticket {
    id          String      @id @default(uuid())
    attendeeId  String
    ticketType  TicketType
    attendee    Attendee    @relation(fields: [attendeeId], references: [id])
}

model Session {
    id          String      @id @default(uuid())
    name        String
    speaker     String
    time        DateTime
    eventId     String
    event       Event       @relation(fields: [eventId], references: [id])
}

enum TicketType {
    FREE
    PAID
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
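&lt;p&gt;To make the relations in this schema concrete, here is a small, hedged sketch in plain TypeScript that mirrors the models above and walks the Event → Attendee → Ticket relations. This is for illustration only; it is not the code Amplication generates, and the names are adjusted slightly (e.g. &lt;code&gt;EventModel&lt;/code&gt;) to avoid clashing with built-in types.&lt;/p&gt;

```typescript
// Plain TypeScript mirror of the Prisma schema above -- illustration only,
// not Amplication's generated code. "Event" is renamed to "EventModel" to
// avoid clashing with built-in types.
type TicketType = "FREE" | "PAID";

interface TicketModel {
  id: string;
  attendeeId: string;
  ticketType: TicketType;
}

interface AttendeeModel {
  id: string;
  name: string;
  email: string;
  eventId: string;
  tickets: TicketModel[];
}

interface EventModel {
  id: string;
  name: string;
  attendees: AttendeeModel[];
}

// Follow the Event -> Attendee -> Ticket relations to count paid tickets.
function countPaidTickets(event: EventModel): number {
  return event.attendees
    .flatMap((a) => a.tickets)
    .filter((t) => t.ticketType === "PAID").length;
}
```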



&lt;ul&gt;
&lt;li&gt;Watch the log as all the entities are imported.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KEU-ACaH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KEU-ACaH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/4.png" alt="" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Once the import is completed, click the entities tab to view all the newly created entities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--53fw25sc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--53fw25sc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/5.png" alt="" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;Switch to the ERD view to see a full picture of all the created entities with their fields and relations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AgpNt41x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AgpNt41x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/modernizing-legacy-systems-with-amplications-db-schema-import/6.png" alt="" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Make any additional configurations or changes, and when you are ready, build and commit the code to your Git repo.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the newly created PR to view the generated code for your Event Management service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;💡 Check out this &lt;a href="https://github.com/amplication/db-schema-import-example"&gt;Git repository&lt;/a&gt; showcasing the outcome of generating code with Amplication based on the example Prisma schema file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gratitude to Our Beta Testers
&lt;/h2&gt;

&lt;p&gt;A big thank you to the beta testers of this feature. Your invaluable feedback was instrumental in refining the DB Schema Import capability, making it more robust and developer-friendly.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Conclusion
&lt;/h2&gt;

&lt;p&gt;Updating old systems is a significant challenge, especially doing so without losing data or running into compatibility issues. With Amplication's Database Schema Import, however, the process is simplified: it makes it easier to move from old systems to new ones without the typical headaches. Amplication offers an easy yet sophisticated solution to complex and time-consuming challenges as you modernize.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Amplication
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.amplication.com"&gt;Amplication&lt;/a&gt; helps increase velocity and consistency for backend teams. It accelerates the development of microservices and backend applications using auto-generation of all the boilerplate and infrastructure code and allows developers to focus their efforts on developing the core business processes and logic.&lt;br&gt;
Using an extensive plugin system, developers and platform teams can automate the generation of services while keeping all the best practices and know-how of the organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try it now
&lt;/h2&gt;

&lt;p&gt;You can start using Amplication by simply visiting &lt;a href="https://app.amplication.com/"&gt;https://app.amplication.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>backend</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Role of Queues in Building Efficient Distributed Applications</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Thu, 10 Aug 2023 08:33:05 +0000</pubDate>
      <link>https://dev.to/amplication/the-role-of-queues-in-building-efficient-distributed-applications-1i6e</link>
      <guid>https://dev.to/amplication/the-role-of-queues-in-building-efficient-distributed-applications-1i6e</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;When building mission-critical applications that manage large workloads, we must ensure our systems can handle the load. In some use cases, we can meet the demand with autoscaling policies, so that our infrastructure scales up or down as demand changes.&lt;/p&gt;

&lt;p&gt;However, there are certain use cases where this just isn't enough. For example, consider the following use case:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; A user wishes to manage a content portfolio of all the articles a person has written.&lt;/li&gt;
&lt;li&gt; This user then inputs a set of URLs (100 URLs) into the system.&lt;/li&gt;
&lt;li&gt; The system then visits these URLs and scrapes certain information needed to build the portfolio. This can include the: Article Name, Published Date, Outline, Reading Time, and Banner Image.&lt;/li&gt;
&lt;li&gt; The system then stores it in an internal database.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A simple flowchart of this process is depicted below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-role-of-queues-in-building-efficient-distributed-applications%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-role-of-queues-in-building-efficient-distributed-applications%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: A flowchart of the content portfolio workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On its own, this process seems quite simple. But this can cause a lot of issues on a large scale:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The system cannot simultaneously handle thousands or millions of requests when each request may include hundreds of URLs.&lt;/li&gt;
&lt;li&gt; We cannot predict whether a URL is working or broken. Therefore, there's a high chance of errors that may break the entire process and make retries very hard to manage.&lt;/li&gt;
&lt;li&gt; The URL we visit may have high latency, slowing the overall process.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach will not scale and is not resilient enough to handle high demand.&lt;/p&gt;
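&lt;p&gt;To ground the discussion, here is a hedged sketch of the naive synchronous flow in TypeScript. The &lt;code&gt;scrapeUrl&lt;/code&gt; function is a hypothetical stand-in for the real scraper; the point is that the batch is processed one URL at a time, so one slow or broken URL stalls everything behind it.&lt;/p&gt;

```typescript
// Hedged sketch of the synchronous approach -- scrapeUrl is a hypothetical
// stand-in for a real fetch-and-parse step.
interface ArticleMeta {
  url: string;
  title: string;
}

async function scrapeUrl(url: string) {
  // A real implementation would fetch the page and extract the article
  // name, published date, outline, reading time, and banner image.
  return { url, title: "Title of " + url };
}

// Processes every URL strictly in order: one slow or failing URL
// delays or breaks the whole batch, with no built-in retry.
async function importPortfolio(urls: string[]) {
  const results: ArticleMeta[] = [];
  for (const url of urls) {
    results.push(await scrapeUrl(url)); // blocks the batch on each URL
  }
  return results;
}
```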

&lt;h1&gt;
  
  
  How do queues work and help the system scale?
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;This is where messaging queues come into the picture.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A queue is a linear data structure that processes items on a first-in-first-out basis. Queues play a huge role in distributed systems because they enable asynchronous communication between components. For example, consider the scenario shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-role-of-queues-in-building-efficient-distributed-applications%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-role-of-queues-in-building-efficient-distributed-applications%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: A synchronous workflow of the web scraper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we build the portfolio import scenario discussed above as a synchronous process, the system scales linearly. This means that if &lt;code&gt;URLInputLambda&lt;/code&gt; processes 100 invocations at the same time, &lt;code&gt;ScraperLambda&lt;/code&gt; is invoked synchronously for each of them, scraping the data one URL after the other.&lt;/p&gt;

&lt;p&gt;This wrongly assumes that all requests take the same time and succeed. When a single URL runs into an error, handling retries for it is tough.&lt;/p&gt;

&lt;p&gt;To resolve that, we should distribute the workflow and use a message queue to handle each task separately and asynchronously. It will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-role-of-queues-in-building-efficient-distributed-applications%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fthe-role-of-queues-in-building-efficient-distributed-applications%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Figure: Refactoring the architecture with a messaging queue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As shown above, we are using &lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt; as the messaging queue. Each URL is pushed into the queue as a single event from the &lt;code&gt;URLInputLambda&lt;/code&gt;. Next, the &lt;code&gt;ScraperLambda&lt;/code&gt; gets each event from the queue and processes it.&lt;/p&gt;

&lt;p&gt;By bringing this architectural change, we immediately gain these benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The system is now responsive and highly scalable: Once the messages are fed into the queue, the &lt;code&gt;URLInputLambda&lt;/code&gt; can complete the request processing and return a &lt;strong&gt;200&lt;/strong&gt; status code to the user indicating that the messages have been accepted into the system. The queue can then take its own time and process the messages asynchronously. This means the forward-facing services can handle millions of requests without issues as the workflow is done asynchronously.&lt;/li&gt;
&lt;li&gt; The system is now decoupled: By introducing the messaging queue, we have decoupled the system's services. The two Lambda functions no longer scale linearly with each other, as the queue manages the flow of requests. If needed, we can run more instances of the processor while keeping other services at minimal capacity.&lt;/li&gt;
&lt;li&gt; The system is now resilient with improved fault tolerance: We can handle broken URLs by setting up a Dead Letter Queue and pushing messages onto it after X retries. This ensures no data is lost and that one failure does not disrupt the entire system.&lt;/li&gt;
&lt;/ol&gt;
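&lt;p&gt;These benefits can be sketched with a tiny in-memory model. This is not Kafka or Lambda code -- just an illustration of the pattern: one event per URL, bounded retries, and a dead letter queue for messages that keep failing.&lt;/p&gt;

```typescript
// Illustrative in-memory model of the queue-based design: one event per
// URL, bounded retries, and a dead letter queue. A real system would use
// a broker such as Apache Kafka or Amazon SQS.
interface QueueMessage {
  url: string;
  attempts: number;
}

const MAX_RETRIES = 3;
const queue: QueueMessage[] = [];
const deadLetterQueue: QueueMessage[] = [];

// Producer: the input service enqueues one message per URL and returns
// immediately instead of scraping inline.
function enqueueUrls(urls: string[]) {
  for (const url of urls) {
    queue.push({ url, attempts: 0 });
  }
}

// Consumer: drain the queue; failed messages are retried, and after
// MAX_RETRIES attempts they are moved to the dead letter queue.
function drain(handle: (url: string) => boolean) {
  while (queue.length > 0) {
    const msg = queue.shift()!;
    const ok = handle(msg.url);
    if (!ok) {
      msg.attempts += 1;
      if (msg.attempts >= MAX_RETRIES) {
        deadLetterQueue.push(msg); // give up, but keep the message
      } else {
        queue.push(msg); // re-enqueue for another attempt
      }
    }
  }
}
```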

&lt;h1&gt;
  
  
  What are the different queueing strategies that are available for developers?
&lt;/h1&gt;

&lt;p&gt;Well, now that we've got a rough idea of how powerful a queue is in a distributed system, it's essential to understand the different types of queues that we can use in our system:&lt;/p&gt;

&lt;h2&gt;
  
  
  Messaging Queues
&lt;/h2&gt;

&lt;p&gt;We can use these standard queues for simple, linear use cases. Their responsibility is to store messages in a queue-based structure and let components send and receive messages from it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Queues
&lt;/h2&gt;

&lt;p&gt;A task queue is a specialized form of message queue that explicitly handles task distribution and processing. Consumers pick up tasks from the queue and execute them independently. This lets us integrate proper flow control and keeps the system from scaling linearly with load; task queues are often powered by event sourcing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Publisher-Subscriber
&lt;/h2&gt;

&lt;p&gt;This type of queueing strategy is sometimes treated as a building block of an event-driven architecture. It consists of a central publisher that pushes data onto subscribers who subscribe to it through an event-streaming-based approach. To learn more about using Pub/Sub models in microservices, look into this &lt;a href="https://amplication.com/blog/using-pub-sub-messaging-for-microservice-communication" rel="noopener noreferrer"&gt;in-depth article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Pub/Sub strategy relies on event-driven architectures where publishers publish events for subscribers to process. One of the pitfalls of this approach is that it operates on a fire-and-forget basis, meaning that if a subscriber fails to process an event, that event gets lost forever. However, we can build a fault tolerance system around the subscriber to handle such errors.&lt;/p&gt;
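&lt;p&gt;A minimal Pub/Sub sketch using Node's built-in &lt;code&gt;EventEmitter&lt;/code&gt; shows this fire-and-forget behavior: the publisher does not wait for subscribers, and an event emitted with no listener attached is simply lost. The topic and subscriber names below are illustrative.&lt;/p&gt;

```typescript
import { EventEmitter } from "node:events";

// Minimal pub/sub sketch: topics are event names, and subscribers react
// independently. The publisher fires and forgets.
const bus = new EventEmitter();
const received: string[] = [];

// Two independent subscribers to the same topic.
bus.on("article.scraped", (url: string) => received.push("indexer:" + url));
bus.on("article.scraped", (url: string) => received.push("notifier:" + url));

// Fire-and-forget publish: if no subscriber were listening at this
// moment, the event would simply be lost.
bus.emit("article.scraped", "https://example.com/post-1");
```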

&lt;h1&gt;
  
  
  When should we use queues in distributed applications?
&lt;/h1&gt;

&lt;p&gt;This is one of the most important questions when working with queues: when should we use them?&lt;/p&gt;

&lt;p&gt;We should typically use a queue to accomplish some flow control or asynchronous processing in our application while enhancing application scalability, reliability, and resiliency.&lt;/p&gt;

&lt;p&gt;Below are some best practices we should consider when using a queue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Asynchronous processing
&lt;/h2&gt;

&lt;p&gt;Queues let parts of our application perform operations asynchronously. As discussed before, if 1,000 users each submitted 100 URLs at the same time, system resources would be exhausted in one go, because every time-consuming request is processed immediately.&lt;/p&gt;

&lt;p&gt;However, by introducing a queue, we can acknowledge the request with an "OK" message, while the system takes its time processing each request asynchronously without scaling its resources linearly. This lets you adopt architectural patterns such as "&lt;a href="https://learn.microsoft.com/en-us/azure/architecture/patterns/queue-based-load-leveling" rel="noopener noreferrer"&gt;Queue-Based Load Leveling&lt;/a&gt;".&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-based communications
&lt;/h2&gt;

&lt;p&gt;Queues are sometimes treated as event hubs as components can push data onto a queue, and other services can subscribe to it, poll for data, and begin processing. This lets us decouple parts of the application and enforce isolated, independent processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decoupling microservices
&lt;/h2&gt;

&lt;p&gt;Queues are used in microservices to decouple services from one another, letting services communicate through queues rather than calling each other directly.&lt;/p&gt;

&lt;p&gt;This improves the scalability of each service and reduces coupling between services, making it easier to modify or replace individual services without affecting others.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improving fault tolerance and retries
&lt;/h2&gt;

&lt;p&gt;Another reason to use a message queue is to ensure that messages are not lost. Queues can retry failed messages or store them through Dead Letter Queues. You can configure the Dead Letter Queue so that your system automatically pushes a failed message onto it after X failed retries.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delayed processing
&lt;/h2&gt;

&lt;p&gt;Certain queueing services like Amazon SQS support delayed processing. This lets your queue deliver messages to its consumers only after a delay of a set number of seconds, which is beneficial when you want to give your consumers time to cope with the demand.&lt;/p&gt;
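&lt;p&gt;Conceptually, delayed delivery can be modeled by stamping each message with the earliest time a consumer may see it. The sketch below is a hedged illustration of the idea, not the SQS API:&lt;/p&gt;

```typescript
// Hedged model of SQS-style delay: each message carries a visibleAt
// timestamp, and consumers only receive messages whose delay has elapsed.
interface DelayedMessage {
  body: string;
  visibleAt: number; // epoch millis at which the message becomes visible
}

const delayed: DelayedMessage[] = [];

function sendWithDelay(body: string, delayMs: number, now: number) {
  delayed.push({ body, visibleAt: now + delayMs });
}

// Return (and remove) only the messages whose delay has elapsed.
function receiveVisible(now: number) {
  const ready = delayed.filter((m) => now >= m.visibleAt);
  for (const m of ready) {
    delayed.splice(delayed.indexOf(m), 1);
  }
  return ready.map((m) => m.body);
}
```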

&lt;h1&gt;
  
  
  What are the common pitfalls and architectural challenges of queues?
&lt;/h1&gt;

&lt;p&gt;But messaging queues are not always the right solution for our application. They have their own set of pitfalls and challenges:&lt;/p&gt;

&lt;h2&gt;
  
  
  Fault Tolerance and Retries
&lt;/h2&gt;

&lt;p&gt;Though queues offer fault tolerance, using it correctly can be challenging. Most of the time, we have to create a separate queue to act as the dead letter queue and manually configure it to behave as we wish.&lt;/p&gt;

&lt;h2&gt;
  
  
  Idempotent Actions
&lt;/h2&gt;

&lt;p&gt;Another challenge with queues is ensuring that a message is processed only once. By default, most queues use at-least-once delivery, so some messages may be handled more than once. It's therefore essential to implement message deduplication using a deduplication ID, which lets the queue prevent a message with the same deduplication ID from being delivered to a consumer again within a certain period.&lt;/p&gt;
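&lt;p&gt;The deduplication-ID idea can be sketched in a few lines: remember when each ID was first seen, and drop any message whose ID reappears within the window. The five-minute window below mirrors SQS FIFO queues, but the code itself is only an illustration.&lt;/p&gt;

```typescript
// Hedged sketch of deduplication-ID handling. SQS FIFO queues use a
// five-minute deduplication window; the logic here is illustrative.
const DEDUP_WINDOW_MS = 5 * 60 * 1000;
const firstSeen = new Map(); // deduplication ID -> first-seen timestamp

// Returns true if the message should be processed,
// false if it is a duplicate within the window.
function accept(dedupId: string, now: number): boolean {
  const seenAt = firstSeen.get(dedupId);
  if (seenAt !== undefined) {
    if (DEDUP_WINDOW_MS > now - seenAt) {
      return false; // duplicate inside the window -- drop it
    }
  }
  firstSeen.set(dedupId, now);
  return true;
}
```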

&lt;h1&gt;
  
  
  What are some queueing services that we can use?
&lt;/h1&gt;

&lt;p&gt;Below are some popular queue services and technologies we can use:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. RabbitMQ
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.rabbitmq.com/" rel="noopener noreferrer"&gt;RabbitMQ&lt;/a&gt; is a robust and highly configurable open-source message broker that implements the Advanced Message Queuing Protocol (AMQP).&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Supports various messaging patterns, including point-to-point, publish-subscribe, request-reply, and more.&lt;/li&gt;
&lt;li&gt;  Rich feature set with advanced routing, message acknowledgment, message durability, and priority queues.&lt;/li&gt;
&lt;li&gt;  High reliability and fault tolerance with support for clustering and message replication.&lt;/li&gt;
&lt;li&gt;  Integrates well with multiple programming languages and platforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Slightly complex setup and configuration compared to other queuing systems.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Apache Kafka
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://kafka.apache.org/" rel="noopener noreferrer"&gt;Apache Kafka&lt;/a&gt; is a distributed event streaming platform designed for high-throughput, real-time data processing and event-driven architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Scalable and high throughput with support for parallel processing and partitioning.&lt;/li&gt;
&lt;li&gt;  Provides strong durability and fault tolerance through data replication across brokers.&lt;/li&gt;
&lt;li&gt;  Real-time data streaming and processing capabilities.&lt;/li&gt;
&lt;li&gt;  Supports event replay, allowing consumers to go back and reprocess past events.&lt;/li&gt;
&lt;li&gt;  Efficiently handles large-scale data streams and has low-latency characteristics.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Slightly more complex to set up and manage compared to traditional message queues.&lt;/li&gt;
&lt;li&gt;  It may not be the best fit for point-to-point messaging or request-reply patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Amazon SQS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;Amazon SQS&lt;/a&gt; is a fully managed message queue service that provides a reliable and scalable solution for asynchronous messaging between distributed components and microservices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Fully managed service with high availability, durability, and automatic scaling.&lt;/li&gt;
&lt;li&gt;  Supports two types of queues: Standard Queue for high throughput and FIFO Queue for ordered, exactly-once processing.&lt;/li&gt;
&lt;li&gt;  Simple configuration of deduplication rules for FIFO queues.&lt;/li&gt;
&lt;li&gt;  Support for dead-letter queues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Messages are limited to 256KB in size; anything bigger will require implementing a solution that leverages S3 buckets.&lt;/li&gt;
&lt;li&gt;  Limitations on the number of in-flight messages and FIFO throughput.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Google Pub/Sub
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/pubsub/docs/overview" rel="noopener noreferrer"&gt;Google Pub/Sub&lt;/a&gt; is a fully managed, highly scalable messaging service that enables real-time messaging and event-driven architectures.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pros:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Scales automatically based on demand and offers low-latency message delivery.&lt;/li&gt;
&lt;li&gt;  Offers fine-grained access controls for securing messages and topics.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cons:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Limited features compared to more advanced event streaming platforms like Apache Kafka.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;Messaging queues have become the industry standard for building highly scalable, reliable, available, and resilient distributed systems. Their ability to decouple components, handle asynchronous processing, and provide fault tolerance has made them vital to modern application architectures.&lt;/p&gt;

&lt;p&gt;However, it's essential to understand the challenges associated with messaging queues. Proper handling of fault tolerance, idempotent actions, and message deduplication is crucial to avoid potential pitfalls.&lt;/p&gt;

&lt;p&gt;Considering these aspects, we can build systems that efficiently handle complex and demanding workloads!&lt;/p&gt;

&lt;h1&gt;
  
  
  How Amplication Fits in
&lt;/h1&gt;

&lt;p&gt;Adopting asynchronous communication models can improve your microservices architecture's scalability, reliability, and performance. But building a scalable microservice architecture requires a lot of planning and boilerplate, scaffolding, and repetitive coding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; is a code generator for backend services that generates all the repetitive parts of microservices architecture, including communication between services using message brokers with all the best practices and industry standards.&lt;/p&gt;

&lt;p&gt;You can build the foundation of your backend services with Amplication in minutes and focus on the business value of your product.&lt;/p&gt;

</description>
      <category>backend</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Node.js asynchronous flow control and event loop</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Tue, 08 Aug 2023 05:06:28 +0000</pubDate>
      <link>https://dev.to/amplication/nodejs-asynchronous-flow-control-and-event-loop-4aa5</link>
      <guid>https://dev.to/amplication/nodejs-asynchronous-flow-control-and-event-loop-4aa5</guid>
      <description>&lt;p&gt;Node.js's powerful asynchronous, event-driven architecture has revolutionized server-side development, enabling the creation of highly scalable and efficient applications through non-blocking I/O operations. However, grasping the inner workings of Node.js's asynchronous flow control and event loop can be quite challenging for newcomers and seasoned developers alike. In this article, we will explore the asynchronous nature of Node.js, including its core components like the Event Loop, Asynchronous APIs, and the Node.js Call Stack.&lt;/p&gt;

&lt;h1&gt;
  
  
  What is asynchronous flow in Node.js?
&lt;/h1&gt;

&lt;p&gt;Asynchronous flow refers to the way Node.js handles and executes operations without blocking the main program flow. As a server-side runtime environment built on Chrome's V8 JavaScript engine, Node.js efficiently manages concurrent tasks and optimizes resource utilization. It achieves this by delegating many operations, such as file I/O, network requests, and database queries, to separate background threads, enabling the main thread to proceed with other tasks. Upon completion, the results from these background tasks are returned to the main thread, often using callbacks, promises, or async/await mechanisms. This approach allows Node.js to maintain responsiveness and scalability, making it a preferred choice for building non-blocking, high-performance applications that can handle multiple concurrent operations effectively.&lt;/p&gt;

&lt;p&gt;At the heart of Node.js's asynchronous flow is the event loop, a crucial component that plays a vital role in managing and executing tasks efficiently. The event loop is responsible for efficiently scheduling and executing asynchronous tasks. It constantly monitors the task queue, executing pending operations when the main thread becomes idle, further enhancing Node.js's responsiveness and enabling the seamless handling of concurrent tasks. Now, let's delve into the specifics of the Node.js event loop and understand how it drives the asynchronous flow of Node.js.&lt;/p&gt;

&lt;h1&gt;
  
  
  Event loop in Node.js
&lt;/h1&gt;

&lt;p&gt;The Event Loop constitutes a vital aspect of Node.js, enabling it to manage asynchronous operations efficiently. It maintains the application's responsiveness by continuously monitoring the Event Queue for pending events. Node.js's Event Loop follows a straightforward yet highly effective mechanism.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Event registration:&lt;/strong&gt; Whenever an asynchronous operation is initiated, such as reading a file or making a network request, the corresponding event is registered and added to the Event Queue.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Event loop execution:&lt;/strong&gt; The Event Loop perpetually checks the Event Queue for pending events. Upon completion of an event, it is dequeued, and its associated callback is added to the Node.js Call Stack for execution.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Callback execution:&lt;/strong&gt; The callbacks associated with the dequeued events are executed, enabling the application to respond to the events.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Non-blocking execution:&lt;/strong&gt; Node.js's Asynchronous APIs ensure that while waiting for an operation to complete, the application can continue executing other tasks, making it highly performant for I/O-intensive operations.&lt;/li&gt;
&lt;/ul&gt;
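
&lt;p&gt;This non-blocking behavior is easy to observe: the callback below is registered with the event loop, the synchronous code keeps running, and the callback executes only once the main flow has finished, even with a 0 ms delay.&lt;/p&gt;

```javascript
const order = [];

// Register an asynchronous event: its callback goes through the event
// queue and runs only after the synchronous code has finished.
setTimeout(function () {
  order.push("callback");
  console.log(order.join(", ")); // sync-1, sync-2, callback
}, 0);

order.push("sync-1");
order.push("sync-2");
// Even with a 0 ms delay, "callback" always runs last.
```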

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-asynchronous-flow-control-and-event-loop%2F0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-asynchronous-flow-control-and-event-loop%2F0.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Source: &lt;a href="https://www.geeksforgeeks.org/node-js-event-loop/" rel="noopener noreferrer"&gt;https://www.geeksforgeeks.org/node-js-event-loop/&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The above figure depicts an example of a Node.js event loop. Upon Node.js startup, the event loop is initialized, and the input script is processed. This input script might involve asynchronous API calls and the scheduling of timers.&lt;/p&gt;

&lt;p&gt;Node.js utilizes a dedicated library called &lt;code&gt;libuv&lt;/code&gt; to handle asynchronous operations. This library, in conjunction with Node's underlying logic, manages a specialized thread pool known as the &lt;code&gt;libuv&lt;/code&gt; thread pool. The &lt;code&gt;libuv&lt;/code&gt; thread pool consists of four threads by default, which are responsible for offloading tasks that are too resource-intensive for the event loop. Such tasks include file system I/O, DNS lookups, and CPU-intensive work like compression and cryptography; network sockets, by contrast, are handled with the operating system's own non-blocking mechanisms rather than the thread pool.&lt;/p&gt;

&lt;p&gt;When the &lt;code&gt;libuv&lt;/code&gt; thread pool completes a task, a corresponding callback function is invoked. This callback function takes care of any potential errors and performs other necessary operations. Subsequently, the callback function is added to the event queue. As the call stack becomes empty, events from the event queue are processed, allowing the callbacks to be executed by placing them onto the call stack.&lt;/p&gt;

&lt;p&gt;Node.js event loop comprises multiple phases, with each phase dedicated to a particular task. The below diagram shows a simplified overview of the different phases in the event loop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-asynchronous-flow-control-and-event-loop%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-asynchronous-flow-control-and-event-loop%2F1.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Source: &lt;a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick" rel="noopener noreferrer"&gt;https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As the above diagram depicts, the Node.js event loop moves through a series of phases, each handling a different type of operation.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Timers:&lt;/strong&gt; Timers phase executes callbacks scheduled by &lt;code&gt;setTimeout()&lt;/code&gt; and &lt;code&gt;setInterval()&lt;/code&gt;. These callbacks will be triggered as soon as possible after the specified amount of time has elapsed. However, external factors like Operating System scheduling or the execution of other callbacks may introduce delays in their execution.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Pending callbacks:&lt;/strong&gt; This phase is dedicated to executing callbacks for specific system operations, like handling TCP errors.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Idle, prepare:&lt;/strong&gt; These phases are used only internally by &lt;code&gt;libuv&lt;/code&gt; for housekeeping between loop iterations; no user callbacks run here.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Poll&lt;/strong&gt;: Two main functions are performed during the poll phase, calculating the appropriate duration for blocking and polling I/O, and processing events in the poll queue. When the event loop enters the poll phase without any scheduled timers, it checks the poll queue. If the queue contains callbacks, they are executed synchronously until the queue is empty or reaches a system-dependent limit. In the absence of callbacks in the poll queue, the event loop proceeds to either the check phase if &lt;code&gt;setImmediate()&lt;/code&gt; scripts are scheduled or waits for new callbacks to be added to the queue, executing them immediately. Once the poll queue becomes empty, the event loop checks if any timers have reached their time thresholds and, if so, moves back to the timers phase to execute their respective callbacks.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Check:&lt;/strong&gt; The check phase invokes any &lt;code&gt;setImmediate()&lt;/code&gt; callbacks that have been added to the queue. As the code is executed, the event loop will eventually reach the poll phase. However, if a callback has been scheduled using &lt;code&gt;setImmediate()&lt;/code&gt; and the poll phase becomes idle, the event loop will proceed directly to the check phase instead of waiting for poll events to occur.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Close callbacks:&lt;/strong&gt; When a socket or handle is closed abruptly, its &lt;code&gt;close&lt;/code&gt; event is emitted in this phase. Otherwise, the &lt;code&gt;close&lt;/code&gt; event is emitted via &lt;code&gt;process.nextTick()&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Understanding Call Stack and Asynchronous APIs
&lt;/h1&gt;

&lt;p&gt;For a comprehensive understanding of Node.js's asynchronous flow control, it is essential to comprehend the Node.js Call Stack and its interaction with asynchronous APIs.&lt;/p&gt;

&lt;p&gt;The Call Stack is a data structure that keeps track of function calls within the program. When a function is invoked, it is added to the top of the stack, and upon completion it is removed, following a last-in-first-out (LIFO) order. As shown in the diagram below, Node.js initially creates a global execution context for the script and places it at the bottom of the stack; it then creates a function execution context for each function called and places it on the stack. This execution stack is also known as the Call Stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-asynchronous-flow-control-and-event-loop%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Fnodejs-asynchronous-flow-control-and-event-loop%2F2.png" alt=""&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Source: &lt;a href="https://www.javatpoint.com/javascript-call-stack" rel="noopener noreferrer"&gt;https://www.javatpoint.com/javascript-call-stack&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
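
&lt;p&gt;The LIFO behavior of the Call Stack is visible in any stack trace: the most recently invoked function sits on top, with each caller beneath it.&lt;/p&gt;

```javascript
function inner() {
  // Capture the current call stack. Frames are pushed and popped
  // last-in-first-out, so inner appears above middle, which appears
  // above outer.
  return new Error("trace").stack;
}
function middle() {
  return inner();
}
function outer() {
  return middle();
}

const trace = outer();
// Print the top three stack frames: inner, middle, outer.
console.log(trace.split("\n").slice(1, 4).join("\n"));
```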

&lt;p&gt;Furthermore, as Node.js is designed to operate asynchronously, it provides many APIs that use callbacks or promises to manage the results of asynchronous operations. When an asynchronous function is invoked, it is offloaded to the Node.js runtime environment, allowing the Event Loop to continue processing other tasks. Once the asynchronous operation completes, the associated callback is placed in the Callback Queue, awaiting the Event Loop's attention for execution. When the Call Stack is empty, the Event Loop picks the first callback from the Callback Queue and pushes it onto the Call Stack for execution. This approach ensures that asynchronous tasks do not block the main thread, contributing to the application's responsiveness.&lt;/p&gt;

&lt;h1&gt;
  
  
  Benefits of Asynchronous Programming in Node.js
&lt;/h1&gt;

&lt;p&gt;The asynchronous nature of Node.js brings several notable advantages to developers and the applications they build, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Scalability and Concurrency:&lt;/strong&gt; Node.js's asynchronous nature allows it to handle a large number of concurrent connections efficiently. By leveraging non-blocking I/O operations and asynchronous event handling, Node.js can serve multiple clients simultaneously without consuming excessive resources.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Resource Efficiency:&lt;/strong&gt; Node.js utilizes a single-threaded event loop to handle multiple concurrent connections, reducing the overhead of creating and managing threads for each connection. This approach results in better memory utilization and improved resource efficiency.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Improved Responsiveness:&lt;/strong&gt; Asynchronous operations prevent the application from becoming unresponsive during time-consuming tasks, leading to enhanced user experiences.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Simplified Code:&lt;/strong&gt; The asynchronous model allows developers to write clean and concise code, avoiding complex control flow and the notorious "callback hell." Asynchronous APIs, along with the adoption of promises and async/await, promote more readable and maintainable codebases.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Easier Debugging:&lt;/strong&gt; Asynchronous operations in Node.js are designed to provide meaningful error messages, making identifying and troubleshooting issues easier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Common Pitfalls and How to Avoid Them
&lt;/h1&gt;

&lt;p&gt;While the asynchronous nature offers remarkable benefits, it also introduces certain challenges that developers should be mindful of. Here are some common pitfalls and recommended strategies to overcome them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Blocking the event loop:&lt;/strong&gt; Running CPU-intensive tasks on the main thread blocks the event loop, preventing it from servicing other incoming events or callbacks and leading to slow application performance, reduced concurrency, and a poor user experience. Keep the event loop responsive by using asynchronous APIs and non-blocking I/O, and by delegating heavy work to the background so the loop can continue processing other events.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Callback hell:&lt;/strong&gt; Chaining multiple callbacks can lead to deeply nested and hard-to-read code. Adopting promises or async/await can significantly improve code readability and maintainability.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Uncaught exceptions:&lt;/strong&gt; Unhandled errors in asynchronous operations can crash the application. Always implement proper error-handling mechanisms to gracefully handle exceptions and prevent application failures.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Memory leaks:&lt;/strong&gt; Improper management of event listeners can result in memory leaks. Ensure to remove event listeners when they are no longer needed to prevent unnecessary memory consumption.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Overuse of asynchronous operations:&lt;/strong&gt; Not all operations need to be asynchronous. Carefully choose synchronous and asynchronous operations to strike the right balance between performance and code clarity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Asynchronous programming is a powerful paradigm that efficiently handles large-scale, concurrent operations. Node.js heavily relies on this approach to achieve remarkable concurrency and scalability. Its asynchronous nature, centered around the Event Loop, Asynchronous APIs, and the Node.js Call Stack, enables efficient management of asynchronous operations, delivering highly responsive applications.&lt;/p&gt;

&lt;p&gt;By embracing the benefits of asynchronous flow, developers can create high-performance and scalable applications that respond to events in real-time, providing users with a seamless and efficient experience. However, it is crucial to remain mindful of potential pitfalls, like blocking the event loop or getting lost in callback hell, and to adopt best practices to ensure smooth execution and error handling. Overall, Node.js's asynchronous nature offers a robust foundation for building modern, responsive server-side applications.&lt;/p&gt;

&lt;h1&gt;
  
  
  Node.js and Amplication
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://amplication.com/" rel="noopener noreferrer"&gt;Amplication&lt;/a&gt; can automatically generate fully functional services based on TypeScript and Node.js to speed up your development process.&lt;/p&gt;

&lt;p&gt;Furthermore, Amplication can include technologies like NestJS, Prisma, PostgreSQL, MySQL, MongoDB, Passport, Jest, and Docker in the generated services. Hence, you can automatically create database connections, authentication and authorization features, unit tests, and ORMs for your Node.js service.&lt;/p&gt;

&lt;p&gt;You can find the getting started guide &lt;a href="https://docs.amplication.com/getting-started/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>node</category>
      <category>backend</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Create API with GraphQL, Prisma, and MongoDB</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Thu, 13 Jul 2023 09:23:13 +0000</pubDate>
      <link>https://dev.to/amplication/create-api-with-graphql-prisma-and-mongodb-5g8c</link>
      <guid>https://dev.to/amplication/create-api-with-graphql-prisma-and-mongodb-5g8c</guid>
      <description>&lt;p&gt;GraphQL, Prisma, and MongoDB have become the go-to options for building highly scalable APIs for modern web applications. With GraphQL, developers can easily define and request the precise data they need, while Prisma and MongoDB simplify database interactions.&lt;/p&gt;

&lt;p&gt;This article aims to provide a comprehensive guide to building a robust API with GraphQL, Prisma, MongoDB, and Node.js. But, I will take a different path than the traditional approach and introduce &lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; to demonstrate how to further simplify the process of creating a GraphQL API.&lt;/p&gt;

&lt;h1&gt;
  
  
  GraphQL
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://graphql.org/"&gt;GraphQL&lt;/a&gt; is a server-side runtime and a query language for APIs. It allows developers to define queries that define the exact structure of the data they require, eliminating the over-fetching and under-fetching of data. Hence, it has become popular over REST APIs in modern application development.&lt;/p&gt;

&lt;h1&gt;
  
  
  Prisma
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.prisma.io/"&gt;Prisma&lt;/a&gt; is an open-source ORM that streamlines database access in your applications. It offers a range of tools and features to simplify database access by abstracting away the database-specific complexities. This allows developers to seamlessly interact with multiple database systems like PostgreSQL, MySQL, SQLite, and SQL Server without writing specific code tailored to each database.&lt;/p&gt;

&lt;h1&gt;
  
  
  MongoDB
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/"&gt;MongoDB&lt;/a&gt; is a highly scalable document-type database. Its flexible schema approach allows developers to adapt and modify the data structure as needed, providing agility in application development.&lt;/p&gt;

&lt;h1&gt;
  
  
  Amplication
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; is an open-source platform that automatically generates APIs and clients based on pre-defined data models. It streamlines the development process by minimizing repetitive coding tasks and boilerplate code, enabling developers to focus on building backend services more efficiently.&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Create API with GraphQL, Prisma, and MongoDB
&lt;/h1&gt;

&lt;p&gt;To demonstrate how these technologies work together, I will implement a small inventory management scenario involving parts and packets, where each packet contains multiple parts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1 - Creating a project with Amplication
&lt;/h2&gt;

&lt;p&gt;The first step is creating a project with Amplication. You will see an option to create a new project in the bottom left corner of the Amplication dashboard. It opens a modal where you can enter a project name; then click the &lt;strong&gt;Create new Project&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MU-CP_Cc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MU-CP_Cc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/00.png" alt="" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Step 2 - Create and setup a service and connect it to the Git repository
&lt;/h2&gt;

&lt;p&gt;Click the &lt;strong&gt;Add Resource&lt;/strong&gt; button and select the &lt;strong&gt;Service&lt;/strong&gt; option from the dropdown. It will redirect you to a page where you can enter a name for the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kj5Elw0h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kj5Elw0h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/01.png" alt="" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, you will be asked to &lt;a href="https://docs.amplication.com/sync-with-github/#creating-a-new-repository-in-github"&gt;connect a git repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qN3MrPg1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qN3MrPg1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/02.png" alt="" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Next, you need to choose between GraphQL and REST. In this case, I will choose GraphQL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1xk7fXqb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1xk7fXqb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/03.png" alt="" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Next, Amplication gives us two options for the project structure. If you plan to create multiple services in the repo, choose the Monorepo option. In this case, we only have a single GraphQL API service to create, so I will select the Polyrepo option here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YI1xYxOa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YI1xYxOa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/04.png" alt="" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then you have to choose the database option. In this project, we are using MongoDB as the database provider.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--we0jtu2v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--we0jtu2v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/05.png" alt="" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, it will ask you to select a template to generate entities automatically. In this case, we will define the data model later, so select "Empty".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_zjH-GsK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_zjH-GsK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/06.png" alt="" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Finally, you can include authentication for your service by selecting the Include Auth Module option. It will automatically generate the authentication code for your service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8sS80GRg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8sS80GRg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/07.png" alt="" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Step 3 - Defining the data model with Amplication
&lt;/h2&gt;

&lt;p&gt;Amplication uses Prisma as its ORM. Prisma simplifies database operations and integrates seamlessly with GraphQL. In this section, we will define our data model in Amplication, and it will generate models using Prisma's declarative syntax.&lt;/p&gt;

&lt;p&gt;You can create new entities by clicking the &lt;strong&gt;Add Entity&lt;/strong&gt; button in the Entities tab. Here, I'm creating two entities that have a relation with each other.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Part model&lt;/strong&gt; - This consists of the fields - id, description, weight, color, created time, and last modified time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vFf2dg91--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vFf2dg91--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/08.png" alt="" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Packet model&lt;/strong&gt; - This consists of the fields - id, name, parts, created time, and last modified time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8EXMGlVt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8EXMGlVt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/09.png" alt="" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;You can also define the relationship between the two entities through Amplication. For that, open the Part entity, and create a field named "packet". It will automatically be set as a relation field. Navigate back to the "Packet" entity, and you can see that you also have a new relation field called "Parts". &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BGHJFkha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BGHJFkha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/10.png" alt="" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;One &lt;strong&gt;Packet&lt;/strong&gt; can be related to many &lt;strong&gt;Parts&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FlFpAdGm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FlFpAdGm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/11.png" alt="" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;One &lt;strong&gt;Part&lt;/strong&gt; can be related to one &lt;strong&gt;Packet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HRW8pXsc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HRW8pXsc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/12.png" alt="" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Step 4: Commit changes
&lt;/h2&gt;

&lt;p&gt;Once all the entities are created, click the &lt;strong&gt;Commit changes &amp;amp; build&lt;/strong&gt; button to sync the changes with the &lt;a href="https://github.com/ChameeraD/inventory-graphqlapi/tree/main/server"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Get the source code
&lt;/h2&gt;

&lt;p&gt;Navigate to the GitHub repo by clicking the &lt;strong&gt;Open With GitHub&lt;/strong&gt; button and clone the source code to your local machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5U0ucB1K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5U0ucB1K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/13.png" alt="" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Amplication has successfully generated all the required files and boilerplate code for you. For example, the &lt;a href="https://github.com/ChameeraD/inventory-graphqlapi/blob/main/server/prisma/schema.prisma"&gt;&lt;strong&gt;prisma/schema.prisma&lt;/strong&gt;&lt;/a&gt; file contains the data models we created from the Amplication UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datasource mongo {
  provider = "mongodb"
  url      = env("DB_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  createdAt DateTime @default(now())
  firstName String?
  id        String   @id @default(auto()) @map("_id") @mongo.ObjectId
  lastName  String?
  password  String
  roles     Json
  updatedAt DateTime @updatedAt
  username  String   @unique
}

model Part {
  color       String?
  createdAt   DateTime @default(now())
  description String?
  id          String   @id @default(auto()) @map("_id") @mongo.ObjectId
  packet      Packet?  @relation(fields: [packetId], references: [id])
  packetId    String?
  updatedAt   DateTime @updatedAt
  weight      String?
}

model Packet {
  createdAt DateTime @default(now())
  id        String   @id @default(auto()) @map("_id") @mongo.ObjectId
  name      String?
  parts     Part[]
  updatedAt DateTime @updatedAt
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, you can customize the application based on your specific requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Install npm packages
&lt;/h2&gt;

&lt;p&gt;Once the application is ready, install the npm packages and dependencies using the &lt;code&gt;npm install&lt;/code&gt; command (or its shorthand, &lt;code&gt;npm i&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7: Start the Docker container for the database
&lt;/h2&gt;

&lt;p&gt;Start the Docker container that runs the database using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run docker:db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, create the application schema in the database using Prisma with the following migration commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run prisma:generate 
npm run db:init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 8: Run the application
&lt;/h2&gt;

&lt;p&gt;Finally, you can run the application using the &lt;code&gt;npm run start&lt;/code&gt; command. It will start the server at &lt;strong&gt;&lt;a href="http://localhost:3000"&gt;http://localhost:3000&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, you can access the GraphQL API using the GraphQL Playground available at &lt;strong&gt;&lt;a href="http://localhost:3000/graphql"&gt;http://localhost:3000/graphql&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f77TpE83--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f77TpE83--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/14.png" alt="" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note: All the above instructions and commands are also listed in the README.md file at the root of the server code.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Testing the APIs using the GraphQL Playground
&lt;/h2&gt;

&lt;p&gt;Once the GraphQL server is up, you can test your APIs with the GraphQL Playground. The examples below show how to test the GraphQL APIs for creating a Part and retrieving all Parts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sample Mutation API
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mutation {
  createPart(data:{
    color: "orange"
    description: "orange part 001"
    weight: "25"
  }) {
    color
    createdAt
    description
    id
    updatedAt
    weight
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IAhr9La8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IAhr9La8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/15.png" alt="" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "data": {
        "createPart": {
            "color": "orange",
            "createdAt": "2023-06-19T06:17:48.964Z",
            "description": "orange part 001",
            "id": "648ff30c6af532b09aeb5562",
            "updatedAt": "2023-06-19T06:17:48.964Z",
            "weight": "25"
        }
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Sample Query API
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
query{
    parts{
        color
        createdAt        
        description
        id
        updatedAt
        weight
        packet{
             id
         }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WJm0ghIw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WJm0ghIw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/create-api-with-graphql-prisma-and-mongodb/16.png" alt="" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "data": {
        "parts": [
            {
                "color": "red",
                "createdAt": "2023-06-13T20:22:08.544Z",
                "description": "red part 001",
                "id": "6488cff0c0b65f41f03c4d3a",
                "updatedAt": "2023-06-13T20:22:08.544Z",
                "weight": "20",
                "packet": null
            },
            {
                "color": "green",
                "createdAt": "2023-06-13T20:22:26.267Z",
                "description": "green part 001",
                "id": "6488d002c0b65f41f03c4d3b",
                "updatedAt": "2023-06-13T20:22:26.267Z",
                "weight": "50",
                "packet": null
            },
            {
                "color": "yellow",
                "createdAt": "2023-06-13T20:25:05.531Z",
                "description": "yellow part 001",
                "id": "6488d0a1c0b65f41f03c4d3d",
                "updatedAt": "2023-06-13T20:25:05.531Z",
                "weight": "50",
                "packet": null
            }
        ]
    }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;As you can see, Amplication significantly simplifies the process of creating GraphQL APIs while reducing the effort needed to initialize a new project. You can effortlessly generate all the required files for multiple data models with a few straightforward steps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; is an open-source tool designed to streamline Node.js development by generating fully functional Node.js services. It goes beyond supporting GraphQL and MongoDB. With Amplication, you can create Node.js APIs and services using other technologies like PostgreSQL, MySQL, Passport, Jest, and Docker. I strongly encourage you to &lt;a href="https://app.amplication.com/login"&gt;give Amplication a try&lt;/a&gt; and witness the impact it can have on your development workflow.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>backend</category>
      <category>programming</category>
      <category>prisma</category>
    </item>
    <item>
      <title>How to Build a Node.js GraphQL API With NestJS and Prisma</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Thu, 06 Jul 2023 07:59:11 +0000</pubDate>
      <link>https://dev.to/amplication/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma-1ehg</link>
      <guid>https://dev.to/amplication/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma-1ehg</guid>
      <description>&lt;p&gt;Are you interested in building highly scalable and performant server-side applications? Have you considered using Node.js, a widely used and robust runtime environment? Recently, developers have preferred using GraphQL with Node.js to build faster and more flexible APIs than the more traditional REST APIs.&lt;/p&gt;

&lt;p&gt;To fully leverage the potential of GraphQL, it is essential to have a reliable framework that can handle the complexities of a growing codebase and an Object Relational Mapping (ORM) tool that simplifies database interactions. NestJS and Prisma are a great couple that can provide exactly that. NestJS is a powerful framework specifically designed for building Node.js applications, while Prisma is an ORM tool that provides a type-safe API for querying databases in Node.js.&lt;/p&gt;

&lt;p&gt;This article aims to provide insights on building a Node.js GraphQL API using NestJS and Prisma while addressing some of the most frequently asked questions about these technologies.&lt;/p&gt;

&lt;p&gt;Many of you are likely well-versed in creating traditional Node.js applications. However, this article will present a unique approach to building a Node.js GraphQL API with NestJS and Prisma, utilizing the &lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; platform to simplify the development process further. But before getting into action, let's get familiar with the technologies mentioned above.&lt;/p&gt;

&lt;h2&gt;
  
  
  GraphQL
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://graphql.org/"&gt;GraphQL&lt;/a&gt; is an open-source query language that provides a more efficient and flexible way to request and manipulate server data. In REST APIs, users must send multiple requests to different endpoints to retrieve various data. In contrast, GraphQL allows users to construct queries that define the exact shape and structure of the data they require, eliminating the over-fetching and under-fetching of data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prisma
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.prisma.io/"&gt;Prisma&lt;/a&gt; is an open-source database toolkit and ORM that provides tools and features to simplify database access in your applications. It abstracts the database-specific details, allowing developers to work with multiple database systems (such as PostgreSQL, MySQL, SQLite, and SQL Server) without writing specific code for each database.&lt;/p&gt;

&lt;h2&gt;
  
  
  NestJS
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://nestjs.com/"&gt;NestJS&lt;/a&gt; is an open-source framework for building efficient, scalable, and maintainable server-side applications using TypeScript. Angular inspires NestJS and uses TypeScript features like strong typing, decorators, and dependency injection to provide a robust architecture for building server-side applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amplication
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; is an open-source platform that automatically generates APIs and clients based on pre-defined data models. It streamlines the development process by minimizing repetitive coding tasks and boilerplate code, enabling developers to focus on building backend services more efficiently.&lt;/p&gt;

&lt;p&gt;In this article, I will combine the power of Amplication, Prisma, GraphQL, and NestJS to create a simple Node.js service with three entities to manage my Blogs.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Steps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1 - Create a New Project in Amplication
&lt;/h3&gt;

&lt;p&gt;Once you log in to the Amplication dashboard, you will see an option to create a new project in the bottom left corner. Clicking it opens a modal where you can enter a project name and click the &lt;strong&gt;Create new Project&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--plxPlcpA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/00.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--plxPlcpA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/00.png" alt="" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 2 - Create a New Service
&lt;/h3&gt;

&lt;p&gt;Then, click the &lt;strong&gt;Add Resource&lt;/strong&gt; button and select the &lt;strong&gt;Service&lt;/strong&gt; option from the dropdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ac-b2Dg7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ac-b2Dg7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/01.png" alt="" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;It will redirect you to a new wizard to configure the new service. First, you need to enter a name for the service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KLJo40ps--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KLJo40ps--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/02.png" alt="" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, &lt;a href="https://docs.amplication.com/sync-with-github/#creating-a-new-repository-in-github"&gt;connect to a GitHub repository&lt;/a&gt; where you want to get the generated code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z77w30lo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z77w30lo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/03.png" alt="" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Next, you need to choose between GraphQL and REST. Since this article is about GraphQL, I have only enabled GraphQL API and Admin UI options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CZ22WfFr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CZ22WfFr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/04.png" alt="" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Next, you can select between Monorepo and Polyrepo based on your project and team requirements. For this example, you can leave the default settings as they are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_kKz_ZEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_kKz_ZEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/05.png" alt="" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Next, select a database from the PostgreSQL, MongoDB, and MySQL options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--agGAzZ0t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--agGAzZ0t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/06.png" alt="" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Also, Amplication can automatically generate entities for your database models if you prefer. We will define the data model for our use case later, so select "Empty."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zL5cXVnS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zL5cXVnS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/07.png" alt="" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Finally, you can include authentication for your service. If you opt to have the auth module, Amplication will automatically generate the authentication code for your service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SQ1Z0PVp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SQ1Z0PVp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/08.png" alt="" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Once the service is created, you will see a window like the one below. Click on &lt;strong&gt;Create entities for my service&lt;/strong&gt; option to start creating entities for the new service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7WEuCjWr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7WEuCjWr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/09.png" alt="" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 3 - Create Entities
&lt;/h3&gt;

&lt;p&gt;By default, Amplication creates a user entity to manage users related to your service. You can easily create new entities by clicking the &lt;strong&gt;Add Entity&lt;/strong&gt; button in the Entities tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aFq7iOJp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aFq7iOJp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/10.png" alt="" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;First, you need to enter the entity name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VewO-8rj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VewO-8rj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/11.png" alt="" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Then, you will get a window like the one below where you can create fields for the entity. For each field, you can configure properties like uniqueness, required or not, searchability, data type, max length, etc.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P67u0Aw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P67u0Aw3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/12.png" alt="" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;After creating the models, you can access the Amplication UI and review them to ensure they have been generated correctly per your expectations. As explained earlier, I have created three entities for my &lt;strong&gt;BlogService&lt;/strong&gt;: &lt;strong&gt;Blog&lt;/strong&gt;, &lt;strong&gt;Publication&lt;/strong&gt;, and &lt;strong&gt;Author&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rdE3Hxw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rdE3Hxw5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/13.png" alt="" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 4: Commit Changes
&lt;/h3&gt;

&lt;p&gt;Once all the entities are created, click the &lt;strong&gt;Commit changes &amp;amp; build&lt;/strong&gt; button to sync the changes with the &lt;a href="https://github.com/ChameeraD/amplication-example"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Get the Source Code
&lt;/h3&gt;

&lt;p&gt;Now you can navigate to the GitHub repo by clicking the &lt;strong&gt;Open With GitHub&lt;/strong&gt; button and clone the source code to your local machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lMhKWPKj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lMhKWPKj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/14.png" alt="" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;That concludes the process. Amplication has successfully generated all the required files and boilerplate code for you. For example, the &lt;strong&gt;amplication-example/apps/blog-service/src/blog/base&lt;/strong&gt; folder contains the Blog model, DTOs, GraphQL resolver, Service, and tests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--z23WWMaA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--z23WWMaA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/15.png" alt="" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;Now you can open the code in VS Code and customize it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Install npm Packages
&lt;/h3&gt;

&lt;p&gt;Once the application is ready, you must install npm packages and dependencies using the &lt;code&gt;npm install&lt;/code&gt; command.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 7: Start the Docker Container for the Database
&lt;/h3&gt;

&lt;p&gt;Start the Docker container that runs the database using the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm run docker:db
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, create the application schema in the database using Prisma with the following migration commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm run prisma:generate 
npm run db:init
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8: Run the Application
&lt;/h3&gt;

&lt;p&gt;Finally, you can run the application using the &lt;code&gt;npm run start&lt;/code&gt; command. It will start the server at &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Also, you can access the GraphQL server through &lt;code&gt;http://localhost:3000/graphql&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--20CW8JlT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--20CW8JlT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/how-to-build-a-nodejs-graphql-api-with-nestjs-and-prisma/16.png" alt="" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  Fun Fact
&lt;/h2&gt;

&lt;p&gt;One fascinating fact about the &lt;a href="https://amplication.com/blog"&gt;Amplication blog&lt;/a&gt; is that its backend is built entirely using Amplication itself. By utilizing Amplication for our own blog, we are not only showcasing its potential but also benefiting from its efficiency and productivity-enhancing features. It's worth mentioning that the entire codebase of the Amplication blog's backend is publicly available on GitHub, allowing developers to explore, learn, and contribute to its development. You can find the code repository for the blog server at &lt;a href="https://github.com/amplication/blog-server"&gt;https://github.com/amplication/blog-server&lt;/a&gt;. We believe in transparency and collaboration, and making the code available underscores our commitment to fostering an open-source community. Feel free to delve into the codebase and witness firsthand how Amplication powers the backend of our blog.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, Amplication dramatically simplifies generating GraphQL APIs using NestJS and Prisma. With just a few steps, developers can quickly generate all the necessary files for multiple data models.&lt;/p&gt;

&lt;p&gt;Amplication is a free, open-source tool that accelerates development by creating fully functional Node.js services. In addition to NestJS and Prisma, it supports several other technologies such as PostgreSQL, MySQL, MongoDB, Passport, Jest, and Docker. Therefore, I encourage you to &lt;a href="https://app.amplication.com/"&gt;try Amplication&lt;/a&gt; and experience the difference it can make in your development workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQs
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q1: What is Prisma?
&lt;/h3&gt;

&lt;p&gt;Prisma is a widely recognized ORM (Object-Relational Mapping) tool that smoothly integrates with NestJS. Using Prisma, developers can easily create database schemas with a simple syntax and construct type-safe APIs for querying databases. Additionally, Prisma facilitates writing database queries and resolving database-related errors in NestJS applications, making it an efficient tool for developers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q2: Does NestJS use GraphQL?
&lt;/h3&gt;

&lt;p&gt;Indeed, NestJS has built-in support for GraphQL. It provides two methods for constructing applications with GraphQL: code-first and schema-first. The code-first approach employs decorators and TypeScript classes to produce a GraphQL schema, whereas the schema-first approach utilizes GraphQL SDL (Schema Definition Language).&lt;/p&gt;
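&lt;p&gt;To illustrate the schema-first approach, a hand-written SDL file might look like the sketch below (the &lt;code&gt;Blog&lt;/code&gt; type is hypothetical); with the code-first approach, NestJS generates an equivalent schema from TypeScript classes annotated with decorators such as &lt;code&gt;@ObjectType()&lt;/code&gt; and &lt;code&gt;@Field()&lt;/code&gt; from &lt;code&gt;@nestjs/graphql&lt;/code&gt;.&lt;/p&gt;

```graphql
# Hypothetical SDL for the schema-first approach; resolvers are then
# implemented in TypeScript against these declared types.
type Blog {
  id: ID!
  title: String!
}

type Query {
  blogs: [Blog!]!
}
```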

&lt;h3&gt;
  
  
  Q3: How Is GraphQL Different From REST?
&lt;/h3&gt;

&lt;p&gt;GraphQL and REST are two of the most popular approaches to building APIs. However, there are some significant differences between them. You can find a detailed comparison between GraphQL and REST &lt;a href="https://amplication.com/blog/7-key-differences-between-graphql-and-rest-apis"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>node</category>
      <category>graphql</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>What is Open-source Software: Everything you wanted to know</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Tue, 30 May 2023 05:52:38 +0000</pubDate>
      <link>https://dev.to/amplication/what-is-open-source-software-everything-you-wanted-to-know-3na2</link>
      <guid>https://dev.to/amplication/what-is-open-source-software-everything-you-wanted-to-know-3na2</guid>
      <description>&lt;p&gt;In the dynamic realm of technology, where innovation reigns supreme, one approach has gained remarkable traction—open-source software. With its ethos of transparency, collaboration, and limitless possibilities, open-source software has revolutionized the way we develop, share, and customize applications. From operating systems to data analysis tools, its impact spans across diverse domains. Join us on a journey as we delve into the history, advantages, and vibrant ecosystem of open-source software, uncovering how it empowers developers and businesses alike. Discover why open-source software is not just a buzzword but a transformative force shaping the digital landscape.&lt;/p&gt;

&lt;p&gt;Open-source software (OSS) is code distributed under a license that allows users to access the software's source code, modify it, and build new features on top of the existing software to meet their requirements. Open-source software applications are vast and varied, ranging from operating systems, web browsers, and mobile applications to data analysis tools, machine learning frameworks, and cloud computing platforms. This article will discuss the open-source software model, its history, and the advantages of using OSS. We will also examine the difference between open-source software and closed-source software.&lt;/p&gt;

&lt;h2&gt;
  
  
  History of Open-Source Software
&lt;/h2&gt;

&lt;p&gt;In the early days of computing, software was primarily developed by academics and corporate researchers working in collaboration, and it was often shared freely thanks to the openness and cooperation already established in academia. However, by the early 1970s, software development had become more expensive, and corporations started licensing and selling software products; IBM was one of the market leaders at the time. By the late 1970s and early 1980s, software vendors began regularly charging for software applications and licenses and restricting new development on existing platforms. Furthermore, vendors began to distribute only the machine executables of the software, without the source code.&lt;/p&gt;

&lt;p&gt;Among the many developers unhappy with these developments was Richard Stallman, the founder of the Free Software Foundation, who initiated the GNU project with the aim of building a complete, free operating system. The foundation also created the GNU General Public License (GPL), which allows users to copy, modify, and redistribute the software freely as long as any new version is distributed under the same license.&lt;/p&gt;

&lt;p&gt;With the rise of the Internet and software built around it, collaborative development became much easier, and open-source software grew massively. In 1991, Linus Torvalds announced a project to create an operating system kernel, and Linux version 1.0 was released in 1994. Linux gained much traction as a free and open-source alternative to proprietary operating systems.&lt;br&gt;
The Apache web server, introduced in 1995, quickly became one of the most popular web servers in the world, letting developers host web applications at no cost for the server software while powering millions of websites. In addition, Apache's open-source license enabled developers to modify the source code to improve its functionality.&lt;/p&gt;

&lt;p&gt;Another powerful piece of open-source software was the MySQL database, introduced in 1995. MySQL remains among the most widely used databases and allows developers to freely modify the source code to improve performance and reliability.&lt;/p&gt;

&lt;p&gt;In 1998, the Open Source Initiative was founded to promote and support open-source software, paving the way to the open-source ecosystem as we know it today. Open-source software also enabled innovation in new technologies built on top of it, driven by the collaboration of a massive community. Amplication is one such example of an innovative open-source tool that can help developers build high-quality applications efficiently. In essence, open-source software became one of the factors driving the expansion of the Internet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open-Source Software Development Model
&lt;/h3&gt;

&lt;p&gt;The open-source software model is a collaborative approach to building software. Contributors can be anyone from individual developers to large corporations. Developers contribute to the project via collaborative platforms like GitHub, and all contributors are allowed to modify the source code in the repository as long as the modifications follow the license terms and software standards.&lt;/p&gt;

&lt;p&gt;When creating a new project, all contributors have access to add new features, modify the source code, and create pull requests to make the software better. The project maintainers review these pull requests and merge them into the repository. Then, the codebase maintainers can create a new release with the newly added features.&lt;/p&gt;

&lt;p&gt;The open-source software model relies on the community to build and improve the software with feedback. But some companies also specialize in building open-source software, such as Red Hat, Canonical, SUSE, Docker, and HashiCorp.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open Source Vs. Closed Source Software
&lt;/h2&gt;

&lt;p&gt;Open-source and closed-source software are distribution models that differ in licensing, source code availability, and several other factors.&lt;/p&gt;

&lt;h3&gt;
  
  
  Source code and development
&lt;/h3&gt;

&lt;p&gt;In the open-source software model, the source code is freely accessible and developed by a large community of developers. In closed-source software, by contrast, the source code is confidential, hidden from users, and maintained by the owning company.&lt;/p&gt;

&lt;h3&gt;
  
  
  Licensing
&lt;/h3&gt;

&lt;p&gt;Open-source software is distributed under an open-source license such as the GNU General Public License, which allows anyone to modify and distribute it under the same license. Closed-source software, in contrast, is distributed under a proprietary license that restricts any unauthorized modification or distribution of the software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maintenance
&lt;/h3&gt;

&lt;p&gt;The community maintains open-source software and decides the required features and the road map according to their needs. On the other hand, closed-source software is based on proprietary models where the owner corporation owns and manages the software's features and road map.&lt;/p&gt;

&lt;h3&gt;
  
  
  Features, support, and cost
&lt;/h3&gt;

&lt;p&gt;Open-source software provides flexibility and customization since users can modify the source code, but dedicated support may be lacking when a unique issue arises. Closed-source software, on the other hand, is expensive but comes with vendor support, and it may offer more advanced features than its open-source counterparts.&lt;/p&gt;

&lt;p&gt;Both software models have pros and cons, so choosing the right one depends on the specific use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Open-Source Software
&lt;/h2&gt;

&lt;p&gt;Open-source software provides many benefits, such as:&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduced expenses
&lt;/h3&gt;

&lt;p&gt;Most open-source software is free and significantly less expensive than commercial software alternatives. Therefore, small businesses and start-ups can benefit considerably from open-source software.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customizability
&lt;/h3&gt;

&lt;p&gt;Open-source software enables users to modify the software as needed since the source code is freely accessible. Therefore, open-source software is an excellent fit for businesses that need custom software tailored to their specific needs without reinventing the wheel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rapid and innovative development
&lt;/h3&gt;

&lt;p&gt;Because open-source software is created by a large community, development cycles can be rapid and innovative, with many contributors working in parallel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Transparency and security
&lt;/h3&gt;

&lt;p&gt;Since OSS is developed through a collaborative process involving a vast community, security flaws and defects in the program can be found quickly. Moreover, because the source code is publicly available, its behavior can be independently verified.&lt;/p&gt;

&lt;p&gt;Overall, OSS provides many benefits ranging from cost savings to transparency and security of the software. In addition, by leveraging the knowledge and expertise of a large global community, OSS can be developed much more efficiently while serving a broader range of perspectives.&lt;/p&gt;

&lt;h2&gt;
  
  
  Applications of Open-Source Software
&lt;/h2&gt;

&lt;p&gt;OSS has a wide range of applications, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operating systems: Open-source operating systems like Linux provide an excellent alternative to their commercial counterparts like Windows.&lt;/li&gt;
&lt;li&gt;Web servers: Many web servers like Apache and Nginx are OSS and highly used to build websites and web applications.&lt;/li&gt;
&lt;li&gt;Database Management: Database software is essential to building applications, and many popular open-source alternatives are available for storing and managing data, such as MySQL and PostgreSQL.&lt;/li&gt;
&lt;li&gt;Development tools: Several tools are utilized when implementing software, and many popular open-source development tools are available to ease the development process, such as Git and languages like Python. In particular, Amplication is a powerful open-source development tool that can help streamline development. If you're a developer looking to accelerate your software development process, consider checking out Amplication and contributing to the growing open-source community.&lt;/li&gt;
&lt;li&gt;Security: OpenSSL and OpenSSH are highly used open-source software in secure communication and data encryption.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Open Source and Amplication
&lt;/h2&gt;

&lt;p&gt;Amplication is an open-source tool for developing software that anyone can access, use, edit, and distribute under the terms of the license. Amplication's open-source nature also makes it a transparent and collaborative tool: users can audit the source code and help identify and fix any bugs or security issues they find. By harnessing this openness, Amplication benefits from a wide range of perspectives and expertise, resulting in higher-quality code and features. This makes it a powerful and accessible tool for developers who want to build and deploy scalable web applications quickly and efficiently, and it aligns with the philosophy of open-source software, which emphasizes collaboration, transparency, and accessibility.&lt;/p&gt;

&lt;p&gt;If you like the work we do, consider &lt;a href="https://github.com/amplication/amplication"&gt;giving us a 🌟 on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Open-source software is a valuable and vital part of the technology ecosystem. Since the dawn of the Internet era, it has contributed significantly to shaping the development and innovation of technologies. Though issues remain to be addressed, open-source software continues to gain ground as an attractive alternative to commercial software for small businesses and for anyone looking to develop tailor-made software for their specific needs. Irrespective of your role, it is essential to understand the basics of open-source software to make informed decisions when choosing software for your business or personal life.&lt;/p&gt;

&lt;p&gt;I hope you have found this article helpful. Thank you!&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Q1: What's the difference between open-source and free software?
&lt;/h3&gt;

&lt;p&gt;The difference between open-source and free software lies in their philosophies. Free software advocates user freedom and ethical software-related considerations, whereas open-source software focuses mainly on the collaborative development of software.&lt;br&gt;
Hence, open source is a way of building software, whereas free software is a social movement.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q2: What is an example of free and open-source software?
&lt;/h3&gt;

&lt;p&gt;The Linux operating system is free and open-source software. Other examples include LibreOffice, Mozilla Firefox, VLC Media Player, the Apache web server, Python, and many more.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q3: What is an example of open-source software?
&lt;/h3&gt;

&lt;p&gt;OpenOffice is an open-source alternative to Microsoft Office. Many other open-source tools exist, such as Amplication, Git, and WordPress.&lt;/p&gt;

&lt;h3&gt;
  
  
  Q4: Where is open source used?
&lt;/h3&gt;

&lt;p&gt;Open-source software is used in various industries and domains, such as software development, web servers, cloud computing, multimedia, education, etc. In addition, many developers use open-source tools and libraries to build software products.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>community</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>7 Tips to Build Scalable Node.js Applications</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Tue, 16 May 2023 06:54:21 +0000</pubDate>
      <link>https://dev.to/amplication/7-tips-to-build-scalable-nodejs-applications-3b35</link>
      <guid>https://dev.to/amplication/7-tips-to-build-scalable-nodejs-applications-3b35</guid>
      <description>&lt;p&gt;Scaling should not be an afterthought when it comes to building software. As the number of users of an application increases, the application should scale and handle the increased payloads effectively.&lt;/p&gt;

&lt;p&gt;Many technologies can be used to build such scalable applications, and Node.js is a popular choice among them. Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine and, if utilized correctly, can be used to build highly scalable, mission-critical applications. This article will discuss several tips that can be helpful when building scalable applications with Node.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Worker threads &amp;amp; concurrency
&lt;/h2&gt;

&lt;p&gt;Node.js executes JavaScript code in a single-threaded model. However, Node.js can function as a multithreaded runtime by utilizing the &lt;a href="https://libuv.org/"&gt;libuv&lt;/a&gt; C library to create hidden threads (see the &lt;a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick/"&gt;event loop&lt;/a&gt;) that handle I/O operations and network requests asynchronously. CPU-intensive tasks such as image or video processing, however, can block the event loop and prevent subsequent requests from executing, increasing the application's latency.&lt;/p&gt;

&lt;p&gt;Therefore, to handle such scenarios, worker threads were introduced in Node.js v10 as an experimental feature, and a stable version was released in Node.js v12.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Does The Worker Thread Work?
&lt;/h3&gt;

&lt;p&gt;A worker thread is an execution thread within a Node.js process with an isolated environment consisting of an event loop. This ensures it can run parallel with other threads to perform expensive operations without blocking the main event loop.&lt;/p&gt;

&lt;p&gt;The parent thread creates worker threads to execute resource-intensive tasks isolated from other threads. This ensures that the parent thread operates smoothly without blocking any operations.&lt;/p&gt;

&lt;p&gt;Creating a worker thread is as simple as importing the &lt;code&gt;worker_threads&lt;/code&gt; library and creating a new object.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;isMainThread&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;parentPort&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;worker_threads&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isMainThread&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Main thread&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Starting worker threads...&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Start a worker thread&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;worker1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;__filename&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Send data to the worker threads&lt;/span&gt;
  &lt;span class="nx"&gt;worker1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;end&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;99&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="c1"&gt;// Listen for messages from the worker threads&lt;/span&gt;
  &lt;span class="nx"&gt;worker1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Main thread received message from worker &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Worker thread&lt;/span&gt;
  &lt;span class="nx"&gt;parentPort&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Worker &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; received data: start=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;, end=&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Perform a computationally expensive task&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Send the result back to the main thread&lt;/span&gt;
    &lt;span class="nx"&gt;parentPort&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;postMessage&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The snippet above depicts a real example of a worker thread in Node.js.&lt;/p&gt;

&lt;p&gt;The primary (parent) thread creates a worker thread from the code in the same file and then passes data to the worker over the message channel. The worker thread then executes the assigned task using that data. Once the expensive operation finishes in the worker thread, the result is sent back to the main thread, where a callback function processes it.&lt;/p&gt;

&lt;p&gt;This example can be extended to multiple worker threads with different operations by creating more worker instances with varying locations of script given for the source parameter in the worker thread constructor. Similarly, any CPU-intensive tasks can be distributed among different worker threads, ensuring that the main thread's event loop is not blocked.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Scaling out to multiple servers/clusters
&lt;/h2&gt;

&lt;p&gt;When an application faces a spike in demand, horizontal scaling can come in handy. Node.js applications can be scaled using two techniques.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scale with clustering.&lt;/li&gt;
&lt;li&gt;Scale across multiple servers.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Scale with clustering
&lt;/h3&gt;

&lt;p&gt;Clustering is commonly used to scale a Node.js application horizontally within the same server. It allows developers to take full advantage of a multi-core system while reducing application downtime and outages by distributing the requests among the child processes.&lt;/p&gt;

&lt;p&gt;You can create child processes that run concurrently with the application sharing the same port, which helps scale the application within the same server.&lt;br&gt;
Clusters can be implemented using the built-in &lt;code&gt;cluster&lt;/code&gt; module of Node.js or a library like &lt;a href="https://www.npmjs.com/package/pm2"&gt;PM2&lt;/a&gt;, widely used in production applications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cluster&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cluster&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;numCPUs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;os&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;cpus&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isMaster&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Master &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; is running`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Fork workers.&lt;/span&gt;
  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;numCPUs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fork&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;exit,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`worker &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;worker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; died`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Workers can share any TCP connection&lt;/span&gt;
  &lt;span class="c1"&gt;//, In this case,, it is an HTTP server&lt;/span&gt;
  &lt;span class="nx"&gt;HTTP&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;writeHead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello World&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nx"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Worker &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; started`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The snippet above highlights the use of horizontal scaling via clustering. It uses the built-in cluster module to fork and create child processes based on the available CPU count.&lt;/p&gt;

&lt;p&gt;As shown above, you must write a decent amount of code to handle clustering, even for a simple application. Unfortunately, this approach is not maintainable for complex mission-critical production applications.&lt;/p&gt;

&lt;p&gt;Therefore, libraries like PM2 can come in handy by handling all of this added complexity behind the scenes. All we need to do is install PM2 globally on the server and run the application with it; PM2 will then spawn as many processes as there are CPU cores on the system.&lt;/p&gt;

&lt;p&gt;Additionally, it provides many more features to manage and visualize the processes. More details on PM2 can be found in their &lt;a href="https://pm2.keymetrics.io/docs/usage/quick-start/"&gt;official documentation&lt;/a&gt;.&lt;/p&gt;
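&lt;p&gt;For example (assuming PM2 is installed from npm and your entry point is a hypothetical &lt;code&gt;app.js&lt;/code&gt;), cluster mode can be enabled from the command line:&lt;/p&gt;

```shell
# Install PM2 globally (requires Node.js and npm).
npm install -g pm2

# Start the app in cluster mode with one process per CPU core.
pm2 start app.js -i max

# Inspect the running processes.
pm2 list
```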

&lt;h3&gt;
  
  
  Scale with multiple servers
&lt;/h3&gt;

&lt;p&gt;A Node.js application can be scaled horizontally using multiple servers as long as the application is running as an independent process. A load balancer can be introduced to handle scaling across servers, where the load balancer will distribute the requests among servers depending on the load.&lt;/p&gt;

&lt;p&gt;However, using a single load balancer is not good practice, as it creates a single point of failure. Therefore, depending on the application's criticality, it is best to introduce multiple load balancers pointing to the same set of servers.&lt;/p&gt;
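&lt;p&gt;A minimal sketch of such a setup (assuming NGINX as the load balancer; the backend IP addresses below are placeholders) could look like this:&lt;/p&gt;

```nginx
# Round-robin load balancing across two Node.js servers (hypothetical IPs).
upstream node_backend {
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
}

server {
    listen 80;

    location / {
        # Forward incoming requests to one of the upstream servers.
        proxy_pass http://node_backend;
    }
}
```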

&lt;h2&gt;
  
  
  3. Breaking the application into microservices
&lt;/h2&gt;

&lt;p&gt;Microservices is a software architecture pattern that breaks the application into smaller, independent, functional units where each unit can function and scale independently without affecting other services. Additionally, it helps improve scalability in several ways.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scalability at the component level
&lt;/h3&gt;

&lt;p&gt;Since each service is independent of the others, it can be scaled on its own, which reduces the complexity of scaling the whole application. This also lets developers scale each microservice according to demand; for example, you can scale only the microservices that receive high traffic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved reliability
&lt;/h3&gt;

&lt;p&gt;Breaking down a large monolith application into smaller, independent microservices makes observing and tracing each service easier. This improved observability can help identify and isolate issues within the system, making it easier to implement failover strategies that minimize the impact on the overall application.&lt;/p&gt;

&lt;p&gt;Additionally, because each microservice operates independently, a failure in one service will only affect that specific service and not the entire application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Efficient scalability
&lt;/h3&gt;

&lt;p&gt;The small size of the services allows better resource utilization where needed. Hence, when the demand increases, the application can effectively scale while utilizing the available resources.&lt;/p&gt;

&lt;p&gt;Additionally, microservices improve development efficiency and can lead to faster deployments and releases. Developers can also use third-party tools to simplify microservice generation.&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; help developers generate fully functional services based on TypeScript and Node.js with popular technologies such as (but not limited to) NestJS, Prisma, PostgreSQL, GraphQL, and MongoDB while ensuring your services are highly scalable and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Optimizing static assets through a CDN
&lt;/h2&gt;

&lt;p&gt;Node.js is fast when it comes to handling dynamic content like JSON objects. However, it tends to underperform when managing static assets such as images. Serving static content from a Node.js application is resource-intensive and can increase the application's latency.&lt;/p&gt;

&lt;p&gt;To avoid this, you can use services like Nginx or Apache to serve static content. In addition, these web servers can also optimize serving static content and cache it on the web server.&lt;/p&gt;

&lt;p&gt;Secondly, you can use specific services like Content Delivery Networks (CDNs) that are built to serve static content. CDNs have edge (POP) locations worldwide, bringing the static content closer to the users, thus improving the latency significantly while freeing the Node.js application to handle the dynamic requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  We'll give it a shot!
&lt;/h2&gt;

&lt;p&gt;Whoa, we're halfway there... with this article. So, if you're pumped up after reading this far, let's keep the energy going and show some love for our project!&lt;/p&gt;

&lt;p&gt;We're living on a prayer, working hard to build amazing tools for the Node.js community. But we can't do it alone! We need your support to keep pushing the boundaries and creating extraordinary experiences for developers like you.&lt;/p&gt;

&lt;p&gt;Give us a shout-out and a 🌟 to the &lt;a href="https://github.com/amplication/amplication"&gt;Amplication repository on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;🤘&lt;/p&gt;



&lt;h2&gt;
  
  
  5. Stateless authentication
&lt;/h2&gt;

&lt;p&gt;Stateless authentication is an authentication technique in which most session details, such as user properties, are saved on the client side, while the server has no information on any previous requests.&lt;/p&gt;

&lt;p&gt;It is generally implemented using token-based authentication via a JWT (JSON Web Token), which contains information (basic details plus authorities) about the logged-in user. The JWT is signed and validated using either a shared secret or a private/public key pair.&lt;/p&gt;

&lt;p&gt;The client sends the JWT to the server on every request, and the server validates the token's signature against its header and payload to authenticate and verify the request. This lets the Node.js backend scale without any dependency on user sessions, since no server-side session lookup is required.&lt;/p&gt;

&lt;p&gt;This process reduces the workload on the server side to validate user requests and makes the authentication process scalable, as it has no dependencies on any specific server.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Use of timeouts for I/O operations
&lt;/h2&gt;

&lt;p&gt;The performance of an application can be affected by external services despite the app's own resilience and high performance. Therefore, it is vital to implement timeouts so that the application does not wait on a response from another service for an extended time. To do so, developers can use the built-in timeout options of third-party libraries.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// set timeout to 5 seconds; after no response for 5 seconds, the request is timed out to process request faster.&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;instance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://test.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;code&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ECONNABORTED&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Timeout occurred&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The snippet above shows a &lt;code&gt;timeout&lt;/code&gt; in action.&lt;/p&gt;

&lt;p&gt;Every request made through this Axios instance is automatically timed out if no response arrives from the service within 5 seconds. It is implemented using the library's &lt;code&gt;timeout&lt;/code&gt; option and prevents a slow service from blocking subsequent requests.&lt;/p&gt;
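&lt;p&gt;For calls made without such a library, a minimal, library-agnostic timeout wrapper can be sketched with &lt;code&gt;Promise.race&lt;/code&gt;:&lt;/p&gt;

```javascript
// Reject if the wrapped promise does not settle within `ms` milliseconds.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('Timeout occurred')), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

&lt;p&gt;On Node.js 18+, where &lt;code&gt;fetch&lt;/code&gt; is built in, you could write &lt;code&gt;withTimeout(fetch(url), 5000)&lt;/code&gt;; the built-in &lt;code&gt;AbortSignal.timeout()&lt;/code&gt; is another option that also cancels the underlying request.&lt;/p&gt;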

&lt;h2&gt;
  
  
  7. Implement tracing, monitoring &amp;amp; observability to debug and solve performance issues actively
&lt;/h2&gt;

&lt;p&gt;Identifying performance bottlenecks of applications is crucial when building a scalable application. Therefore, implementing tracing, monitoring, and observability can provide insights into the actual bottlenecks and can help resolve the issues quickly.&lt;/p&gt;

&lt;p&gt;Tracing is the ability to follow a request through each stage of its execution, revealing the exact sequence of events before the request completes. This gives developers the information they need to pinpoint areas of concern regarding latency, so they can inspect and fix parts of a request's path before they become significant issues.&lt;/p&gt;

&lt;p&gt;Monitoring refers to tracking fundamental metrics of the application, such as response time, error rates, and resource utilization so that developers are kept aware of all activities within the application. They can then utilize the generated logs to troubleshoot errors within the application while using the metrics to identify areas of bottlenecks.&lt;/p&gt;

&lt;p&gt;Finally, developers can combine tracing and monitoring to gain a holistic view of the application through observability. Observability helps developers determine the system's internal state from its generated outputs (metrics and logs), making it easier to find, isolate, and fix performance issues quickly.&lt;/p&gt;

&lt;p&gt;Many open-source and commercial tools are available to implement tracing and monitoring in applications to improve the application's observability. These tools provide extensive insights into the application's performance and can help identify issues and improve the application's performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we discussed 7 tips for building scalable Node.js applications. Adopting these techniques will improve the performance and scalability of your Node.js application.&lt;/p&gt;

&lt;p&gt;However, it is essential to identify and isolate bottlenecks before jumping in with a solution. Also, it's critical to understand that bottlenecks could change as your application scales. Thus, you may notice hot spots in different application parts at different scales. So, it is vital to maintain observability and regularly monitor the critical factors to identify the issues early.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Amplication can help
&lt;/h2&gt;

&lt;p&gt;Amplication is an open-source platform that helps you build backend services without spending time on repetitive coding tasks and boilerplate code. Instead, &lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; auto-generates a fully functional, production-ready backend based on TypeScript and Node.js.&lt;/p&gt;

&lt;p&gt;Whether you build a single service or a microservices architecture, Amplication allows you to build at any scale.&lt;/p&gt;

&lt;p&gt;With Amplication, development teams can create multiple services, manage microservices communication, use Kafka, connect to storage, or add an API Gateway.&lt;/p&gt;

&lt;p&gt;Amplication can sync the generated code with a monorepo where each service goes to a different folder or with various repositories. You can manage dozens or hundreds of services with maximum consistency from a single source of truth and centralized management and visibility.&lt;/p&gt;

</description>
      <category>node</category>
      <category>architecture</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Monoliths to Microservices using the Strangler Pattern</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Thu, 11 May 2023 09:17:05 +0000</pubDate>
      <link>https://dev.to/amplication/monoliths-to-microservices-using-the-strangler-pattern-25h2</link>
      <guid>https://dev.to/amplication/monoliths-to-microservices-using-the-strangler-pattern-25h2</guid>
      <description>&lt;p&gt;Microservices architecture has many advantages compared to monolithic architecture. Hence, many organizations start building new applications with microservices or convert existing monoliths to microservices.&lt;/p&gt;

&lt;p&gt;However, transforming a large-scale monolithic application to microservices from scratch is a challenging task. It requires significant architecture and development effort and poses a higher risk to the organization. Nevertheless, some well-established patterns and practices may help to reduce these risks.&lt;/p&gt;

&lt;p&gt;Let's look at how we can use the Strangler pattern to transform a monolith into microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do we need the Strangler Pattern?
&lt;/h2&gt;

&lt;p&gt;According to &lt;a href="https://www.statista.com/statistics/1236823/microservices-usage-per-organization-size/"&gt;Statista&lt;/a&gt;, over 80% of organizations worldwide use microservices, and 17.5% plan to migrate their existing architectures to microservices. However, rewriting an existing monolithic application from scratch as microservices is a challenging task. As mentioned, it involves serious risks and requires significant developer effort.&lt;/p&gt;

&lt;p&gt;For example, here are some of the challenges you might face when rewriting an application from scratch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Time-consuming&lt;/strong&gt; - Rewriting an application can take a significant amount of time; for a large-scale application, it can take years. There can also be unexpected issues that take considerable time to resolve.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Can't develop new features&lt;/strong&gt; - The rewriting process requires a colossal developer effort. You might have to divert the whole development team to the migration task, which will slow down the new feature development.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uncertainty&lt;/strong&gt; - You must wait until the development process completes to use the new system. Only then can you verify that all the existing functionality works as expected.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Strangler Pattern minimizes these risks and uncertainties by providing a way to gradually refactor a monolithic application to microservices rather than rewriting it from scratch.&lt;/p&gt;

&lt;p&gt;Furthermore, organizations don't have to pause new feature development or wait long to see a working application. Instead, they can choose components based on priority and refactor them to microservices while keeping the monolith and the microservices running as a single application.&lt;/p&gt;

&lt;p&gt;Now we are beginning to understand the need for the Strangler Pattern. So, let's get into more detail on the Strangler Pattern and how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Strangler Pattern?
&lt;/h2&gt;

&lt;p&gt;The Strangler Pattern is a software design pattern used to gradually refactor monolithic applications into microservices. It helps developers replace parts of the monolith with new and improved components while maintaining the same functionality.&lt;/p&gt;

&lt;p&gt;The Strangler Pattern uses a wrapper to integrate the microservices with the monolith. This wrapper is an integral part of this design pattern since it bridges the monolith and the microservices, directing incoming requests to the appropriate component for processing. Furthermore, it acts as a fail-safe, allowing the organization to roll back to the monolith if there are any issues with the new microservice.&lt;/p&gt;

&lt;p&gt;To get a better understanding, let's discuss how the Strangler Pattern works using a real-world scenario.&lt;/p&gt;

&lt;h2&gt;
  
  
  A quick word from our sponsor
&lt;/h2&gt;

&lt;p&gt;Are you enjoying this article about the Strangler Pattern and how you can migrate from a monolith to microservices? If so, we invite you to check out &lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt;, the low-code development platform that can help you make that migration with ease. And, if you appreciate our mission to simplify app development for everyone, please consider showing your support by &lt;a href="https://github.com/amplication/amplication"&gt;giving our repo on GitHub a 🌟&lt;/a&gt;. In return we'll send you an imaginary star which you can stick on anything! Thank you for your support.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LvBu5_bM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/BMO.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LvBu5_bM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/BMO.webp" alt="BMO getting a gold star sticker." width="480" height="269"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h2&gt;
  
  
  How does a Strangler Pattern work?
&lt;/h2&gt;

&lt;p&gt;Refactoring a monolith into microservices with the Strangler Pattern consists of 3 main steps: &lt;strong&gt;Transform&lt;/strong&gt;, &lt;strong&gt;Coexist&lt;/strong&gt;, and &lt;strong&gt;Eliminate&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transform&lt;/strong&gt;: You need to start by identifying the main components of the monolithic application. This step involves identifying the boundaries between the existing application and the new components being developed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coexist&lt;/strong&gt;: Then, build a wrapper around the monolith to allow the new components to coexist with the existing application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Eliminate&lt;/strong&gt;: Finally, eliminate the monolith by replacing parts with new components. However, you must ensure that each microservice works as expected before integrating it into the system.&lt;/li&gt;
&lt;/ul&gt;
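&lt;p&gt;The wrapper at the heart of these steps can be as simple as a route table that a facade consults for every incoming request. A minimal sketch, where the internal host names are hypothetical:&lt;/p&gt;

```javascript
// Hypothetical strangler facade: requests for migrated functionality go to
// the new microservice; everything else still reaches the monolith.
const MONOLITH = 'http://monolith.internal:3000';
const MIGRATED = [
  { prefix: '/search', target: 'http://search-service.internal:3001' },
];

function resolveTarget(path) {
  const route = MIGRATED.find((r) => path.startsWith(r.prefix));
  return route ? route.target : MONOLITH;
}
```

&lt;p&gt;As more components are migrated, entries are added to the table; rolling back a problematic microservice is just removing its entry, which is what makes the wrapper a fail-safe.&lt;/p&gt;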

&lt;p&gt;For example, consider an e-commerce application with a monolithic architecture. The application will include user registration, product search, shopping cart, payment handling, inventory management, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9fFvJvDk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9fFvJvDk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/1.png" alt="Transform &amp;gt; Coexist &amp;gt; Eliminate" width="800" height="193"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;To give you a better understanding, I will divide the migration process into five steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1 - Deciding the first microservice
&lt;/h3&gt;

&lt;p&gt;As the first step, you need to identify each of the monolith's components along with its capabilities and limitations.&lt;/p&gt;

&lt;p&gt;Then, decide on the first microservice you will migrate. There are several factors to consider when ordering the components for refactoring, which I explain in detail in a later section.&lt;/p&gt;

&lt;p&gt;In this example, I have decided to use product search functionality as the first component to migrate since it is not in the application's critical path.&lt;/p&gt;

&lt;p&gt;Also, it is crucial to identify the dependencies between the selected component and the others. In this example, the product search component is directly connected with the inventory and shopping cart components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2 - Creating the new microservice
&lt;/h3&gt;

&lt;p&gt;As the second step, you must create the new microservice and move all the relevant business logic into it. Then, create an API for the microservice, which will act as the wrapper that handles communication between the monolith and the microservice.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5oU21bD1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5oU21bD1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/2.png" alt="Monolith to Microservice Transition" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;After implementing the microservice, you need to run both the monolith's component and the microservice in parallel to verify the functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3 - Handling the databases
&lt;/h3&gt;

&lt;p&gt;When you create a new microservice, it is necessary to create a separate database. In addition, since we are maintaining both the monolith and the microservice in parallel for a time, we also need to keep the primary database and the microservice's database in sync.&lt;/p&gt;

&lt;p&gt;For that, we can use a technique like read-through/write-through caching. In this example, we will treat the microservice's database as a cache until we remove the product search component from the monolith. When a user searches for a product, we first look in the microservice's database; if the data is not there, we fetch it from the primary database.&lt;/p&gt;
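&lt;p&gt;The read-through lookup described above can be sketched as follows. The in-memory &lt;code&gt;Map&lt;/code&gt; stands in for the microservice's database, and &lt;code&gt;loadFromPrimaryDb&lt;/code&gt; is a hypothetical accessor for the monolith's primary database:&lt;/p&gt;

```javascript
// Read-through lookup: serve from the microservice's store when possible,
// fall back to the primary database and populate the store on a miss.
const serviceDb = new Map(); // stands in for the microservice's database

async function findProduct(id, loadFromPrimaryDb) {
  if (serviceDb.has(id)) return serviceDb.get(id);
  const product = await loadFromPrimaryDb(id); // hit the monolith's database
  if (product) serviceDb.set(id, product);     // keep the two stores in sync
  return product;
}
```

&lt;p&gt;Once the monolith's search component is retired, the fallback path disappears and the microservice's database becomes the sole source of truth for product search.&lt;/p&gt;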

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4o8WLx10--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4o8WLx10--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/monoliths-to-microservices-using-the-strangler-pattern/3.png" alt="Running both architectures at the same time." width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;h3&gt;
  
  
  Step 4 - Handling requests
&lt;/h3&gt;

&lt;p&gt;Initially, routing a small percentage of traffic to the new search service is advised to reduce the blast radius of the new microservice. For that, you can use a load balancer.&lt;/p&gt;
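&lt;p&gt;If the routing decision lives in your own code rather than in an off-the-shelf load balancer, the weighted split can be sketched as a one-line decision. The rollout fraction here is an assumption to tune as confidence in the new service grows:&lt;/p&gt;

```javascript
// Weighted routing: send a small fraction of traffic to the new service.
// A rolloutFraction of 0.05 sends roughly 5% of requests to it.
function chooseBackend(rolloutFraction, random = Math.random) {
  return random() >= rolloutFraction ? 'monolith' : 'search-microservice';
}
```

&lt;p&gt;Raising the fraction step by step (5% → 25% → 100%) limits the blast radius while the new service proves itself under real traffic.&lt;/p&gt;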

&lt;h3&gt;
  
  
  Step 5 - Test and repeat
&lt;/h3&gt;

&lt;p&gt;Once you verify the functionality of the new microservice, you can remove the component in the monolith and route all the traffic to the microservice. Then, you must repeat these steps for each component and gradually convert the monolith into microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to select the component order for refactoring?
&lt;/h2&gt;

&lt;p&gt;A major question developers face when using the Strangler Pattern is the order in which to refactor components. Although there are no hard rules for the selection process, it is essential to work out which components should be migrated first to avoid unexpected delays.&lt;/p&gt;

&lt;p&gt;Here are some factors you need to consider when deciding the order of the components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Consider dependencies&lt;/strong&gt;: It is good to start with components with few dependencies, as these are likely to be easier to refactor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Start with low-risk components&lt;/strong&gt;: Starting with low-risk components will minimize the impact on the system if something goes wrong in the early stages. Also, it will help you to gain experience and build confidence in the process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business needs&lt;/strong&gt;: If there is a high-demand component and you need to scale it as soon as possible, you should start with that component to facilitate the business requirement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Components with frequent changes&lt;/strong&gt;: If components require frequent updates and deployments, refactoring them as microservices will allow you to manage separate deployment pipelines for those services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User experience&lt;/strong&gt;: Start with the components with the most negligible impact on end-users. It reduces disruptions and helps to maintain a good user experience during the transition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrations&lt;/strong&gt;: Refactor components with minimal integration with other systems first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apart from the above, there are many other factors you can consider. However, ultimately you need to consider the most prominent factors for your project and order the components for the refactoring process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Different ways to implement the Strangler Pattern
&lt;/h2&gt;

&lt;p&gt;Unlike other design patterns, the Strangler Pattern has no language-specific libraries to help developers implement it. Instead, developers must use technologies, frameworks, and best practices to implement the Strangler Pattern.&lt;/p&gt;

&lt;p&gt;Here are some of the most used approaches in implementing the Strangler Pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Using ready-made platforms&lt;/strong&gt;: Instead of building the infrastructure and platform architecture for microservices from the ground up, you can consider utilizing ready-made platforms that mostly do the heavy lifting. E.g., &lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt;, &lt;a href="https://strapi.io/"&gt;Strapi&lt;/a&gt;, &lt;a href="https://appwrite.io/"&gt;AppWrite&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Using serverless&lt;/strong&gt;: You can use AWS Lambda or Google Cloud Functions to implement independent functions triggered by specific events and eventually replace the parts of the monolith with them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API gateways&lt;/strong&gt;: An API gateway can be the wrapper when implementing the Strangler Pattern. It provides a unified interface and can be configured to redirect requests to the appropriate component. Amazon API Gateway, Kong, and Tyk are some popular API gateways you can use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reverse proxies&lt;/strong&gt;: A reverse proxy like Nginx can also be used as the wrapper in the Strangler Pattern.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Routing and load balancing&lt;/strong&gt;: Routing and load balancing technologies can redirect traffic to the appropriate components. DNS-based routing and software-defined load balancers are popular options you can use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service discovery&lt;/strong&gt;: You can get the help of the service discovery pattern to find the locations of new microservices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service mesh&lt;/strong&gt;: You can use technologies like Istio or Linkerd to manage the communication between the new components.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are only a subset of the tools and technologies you can use to implement the Strangler Pattern. Make sure to select only a limited number of them based on your requirements; otherwise, you will end up over-engineering the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of the Strangler Pattern
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Incremental migration&lt;/strong&gt;: Allows for an incremental migration from a monolith to microservices and reduces the risk associated with the migration process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced downtime&lt;/strong&gt;: Ensures the system remains operational throughout the migration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved resilience&lt;/strong&gt;: Improves the system's resilience by ensuring that the monolith and microservices coexist and work together seamlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased flexibility&lt;/strong&gt;: Allows the organization to choose the best technology for each part of the system, rather than being forced to use a single technology for the entire system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better maintainability&lt;/strong&gt;: Breaking down the monolith into microservices makes it easier to maintain the system over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Challenges of the Strangler Pattern
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Modularizing complexity&lt;/strong&gt;: Breaking a monolith into components is difficult when the functionalities are tightly coupled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data compatibility&lt;/strong&gt;: Sometimes, you may need to perform a data migration or transformation when the monolith and the microservices use different data formats.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing effort&lt;/strong&gt;: Extensive testing is required to ensure the new microservices work as expected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skill requirements&lt;/strong&gt;: Using the Strangler Pattern requires a high level of technical skill. Organizations that lack this expertise may find the pattern challenging to implement.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Speed up microservice generation
&lt;/h2&gt;

&lt;p&gt;This article discussed how the Strangler Pattern simplifies the conversion of monoliths to microservices while highlighting the advantages and challenges of the process.&lt;/p&gt;

&lt;p&gt;However, converting a monolith with hundreds of components is challenging, even with the Strangler Pattern. You need to design the underlying architecture for microservices and work on creating each service from scratch.&lt;/p&gt;

&lt;p&gt;One solution is to create a blueprint for each microservice and develop the tools to generate it. You can even consider developing a domain-driven language to standardize and reuse best practices in code and configuration. However, it also adds complexity and cost to your architecture, where you need to manage and evolve these tools in the long run.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Amplication can help
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; can manage all the underlying complexities while simplifying the creation of microservices.&lt;/p&gt;

&lt;p&gt;Amplication supports microservices through the project hierarchy. A project groups together multiple resources used and created by Amplication, enabling support for various use cases. This simplifies the creation of connected services and makes syncing with GitHub across multiple Services much easier.&lt;/p&gt;

&lt;p&gt;You can find a getting started guide &lt;a href="https://docs.amplication.com/getting-started/"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>microservices</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>ORM, Prisma, and How You Should Build Your Next Backend Database Project</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Tue, 25 Apr 2023 06:44:23 +0000</pubDate>
      <link>https://dev.to/amplication/orm-prisma-and-how-you-should-build-your-next-backend-database-project-hb2</link>
      <guid>https://dev.to/amplication/orm-prisma-and-how-you-should-build-your-next-backend-database-project-hb2</guid>
      <description>&lt;p&gt;Prisma is a premier object-relational mapping (ORM) tool in the Node.js ecosystem. It's powerful yet simple and easy to use. A great team also supports it, and it has enjoyed widespread adoption by the Node.js community. Prisma has earned such an excellent reputation that we incorporated it into our tech stack for Amplication. Not only do we use Prisma for our generated applications, but we also use it in our internal tech stack.&lt;/p&gt;

&lt;p&gt;In this article, we'll closely examine some of Prisma's features and the team that supports it. Prisma has become as crucial to Amplication as Node itself, and I highly recommend you consider it for your next Node project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Prisma
&lt;/h2&gt;

&lt;p&gt;Prisma is one of the most popular ORMs in the Node.js ecosystem, and over the last two years, it has seen consistent growth in the community. Much of this has to do with the team supporting the product. Prisma is open-source and licensed under Apache-2.0, making it very flexible for your projects. An impressive array of investors, including key people from Heroku, Netlify, and GraphQL, backs it. Support for Prisma is ongoing and robust, as the team continuously makes improvements with regular releases that address bugs, performance issues, and other enhancements.&lt;/p&gt;

&lt;p&gt;One of the most important considerations when choosing a tech stack is ensuring that any application or package you use is well supported. No matter how performant or sophisticated a package is, if it's under-supported, it can add untold hours to your project as you try to fit it into an edge use case or work out a bug. Prisma's popularity and open-source status mean that not only is it supported by a fantastic internal team, but the community support is incredible, too. There are plenty of well-educated and experienced Prisma developers on the usual forums.&lt;/p&gt;

&lt;p&gt;Prisma is a package that invites new users. It is well-documented and well-suited for your first or 50th Node project. Their &lt;a href="https://www.prisma.io/docs"&gt;documentation&lt;/a&gt; contains comprehensive information on the features available. You can also access several guides, including introductory "how-tos," steps for deploying applications that use Prisma, and instructions on migrating from another ORM. The guides are thorough, up-to-date, and easy to follow. In addition, Prisma's team has made sure your experience using their ORM is as painless as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Use Prisma?
&lt;/h2&gt;

&lt;p&gt;In the simplest case, Prisma serves as an ORM for accessing your database. As part of its suite of products, Prisma offers a "Client API" that makes writing even the most complex database operations simple. But where Prisma shines is in its ability to handle complex querying operations.&lt;/p&gt;

&lt;p&gt;Prisma's API lets you easily traverse relationships. Below is an example of an application accessing a database from the &lt;a href="https://www.prisma.io/client"&gt;Prisma Client tutorial&lt;/a&gt;. First, the application accesses an author's profile by using the navigation properties from the blog post to the author and finally to the author's profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;authorProfile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Profile&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;findUnique&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;profile&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It also makes pagination a breeze by exposing arguments for order, limits, and cursors. Below you can see an example where you can use the client to take five posts from the database by starting from the post with &lt;code&gt;id=2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Find the next 5 posts after post id 2&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;paginatedPosts3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Post&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;post&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;findMany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;take&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;cursor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It also allows for aggregate queries such as sum and count:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Group users by country&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;groupUsers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;groupBy&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;by&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;country&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;_count&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Along with these features, Prisma's client also facilitates transactions, includes middleware and the execution of raw queries, and helps make logging simple.&lt;/p&gt;

&lt;p&gt;But to limit Prisma's capabilities to just reading or writing data would be a major disservice. Another great aspect of Prisma is how it handles migrations. Prisma supports SDL-first and code-first approaches to modeling your data structure. If you are creating a new application, I suggest the code-first approach; for an existing database, it can be easier to generate your Prisma schema from the database itself.&lt;/p&gt;

&lt;p&gt;If you decide to work with the code-first approach, you can use &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-migrate"&gt;Prisma Migrate&lt;/a&gt; to create the tables in your database. Start by writing your schema definition using Prisma's markup language. The following is a sample from the Prisma tutorial:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model Post {
  id        Int      @id @default(autoincrement())
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  title     String   @db.VarChar(255)
  content   String?
  published Boolean  @default(false)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
}

model Profile {
  id     Int     @id @default(autoincrement())
  bio    String?
  user   User    @relation(fields: [userId], references: [id])
  userId Int     @unique
}

model User {
  id      Int      @id @default(autoincrement())
  email   String   @unique
  name    String?
  posts   Post[]
  profile Profile?
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With your schema set up, you can prepare your database by selecting one of the providers supported by Prisma. In the example below, I'm using PostgreSQL, but there are several available options like MySQL, SQLite, MongoDB, CockroachDB, and Microsoft SQL Server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With your schema and provider set, Prisma is now able to convert your schema into executable code to create your database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- CreateTable&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"User"&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nv"&gt;"id"&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nv"&gt;"name"&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- CreateTable&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"Post"&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nv"&gt;"id"&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nv"&gt;"title"&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;150&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nv"&gt;"published"&lt;/span&gt; &lt;span class="nb"&gt;BOOLEAN&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nv"&gt;"authorId"&lt;/span&gt; &lt;span class="nb"&gt;INTEGER&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- AddForeignKey&lt;/span&gt;
&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="nv"&gt;"Post"&lt;/span&gt; &lt;span class="k"&gt;ADD&lt;/span&gt; &lt;span class="k"&gt;FOREIGN&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"authorId"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;REFERENCES&lt;/span&gt; &lt;span class="nv"&gt;"User"&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;DELETE&lt;/span&gt; &lt;span class="k"&gt;CASCADE&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt; &lt;span class="k"&gt;CASCADE&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's just that simple. If you switch from PostgreSQL to MySQL (or any other provider), change your provider and rebuild your migration. If you need to create seed data, you can configure your application to know where your seed data is and use the Prisma client to insert any data you need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using Prisma With Amplication
&lt;/h2&gt;

&lt;p&gt;Of course, you need to be able to get the data from your database to your users. Amplication will create a &lt;code&gt;schema.prisma&lt;/code&gt; file from the configuration you entered in the UI. This file will contain all the fields, types, and relationships necessary to create your databases. Each entity you create in the UI will also generate a series of classes. Since Amplication is built on NestJS, we generate a module, a resolver, and a service to handle queries to the API for the entity.&lt;/p&gt;

&lt;p&gt;First, the service is where we use Prisma to query or modify data in the database. We generate typed classes to represent the arguments for each query, such as CustomerFindManyArguments. Then we create wrapper methods for Prisma such as &lt;code&gt;findMany&lt;/code&gt;, which calls the customer collection's findMany function from the Prisma client and passes the instance of CustomerFindManyArguments to it. The Prisma client uses these arguments to filter your data or apply pagination.&lt;/p&gt;

&lt;p&gt;Amplication will then generate each entity's REST API controllers and GraphQL resolvers. First, we generate the typed classes, using the same entity definition you provided to build &lt;code&gt;schema.prisma&lt;/code&gt;. This keeps the API and database in sync regarding required fields and types. Then we can use those typed classes to inform Swagger of the structure of the application's API. Finally, we register the modules to the application and then the application to Swagger. You'll notice that the fields in your &lt;code&gt;Args&lt;/code&gt; classes are decorated with &lt;code&gt;ApiProperty&lt;/code&gt;. This gives Swagger all the details it needs to create your API documentation.&lt;/p&gt;

&lt;p&gt;A closer look at the CustomerFindManyArguments class reveals that we also use decorators from the &lt;code&gt;@nestjs/graphql&lt;/code&gt; package. By doing this, we can inform GraphQL of the available fields in your data. We then generate a resolver to select what the application will expose via GraphQL. The resolvers depend on an entity service generated with your application.&lt;/p&gt;

&lt;p&gt;Finally, Amplication will generate Swagger documentation for your API that fully represents your end-to-end data transfer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/amplication/sample-app/blob/main/apps/ecommerce-server/src/customer/base/customer.service.base.ts"&gt;You can review the &lt;code&gt;CustomerServiceBase&lt;/code&gt; in our sample application for a closer look.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M0k5I09x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/orm-prisma-and-how-you-should-build-your-next-backend-database-project/image1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M0k5I09x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/orm-prisma-and-how-you-should-build-your-next-backend-database-project/image1.png" alt="Swagger UI" width="800" height="347"&gt;&lt;/a&gt;&lt;br&gt;Figure 1: Swagger UI (Source: &lt;a href="https://docs.amplication.com/api/"&gt;Amplication&lt;/a&gt;)
  &lt;/p&gt;



&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Now you can see why Prisma has become integral to the Amplication stack. It is simple and easy to use but powerful and fully featured. The team supporting the product has done a fantastic job of enhancing it, too. As a result, Prisma will be a part of our stack for the foreseeable future, and we recommend you make it a part of yours as well.&lt;/p&gt;

&lt;p&gt;If you're excited about Amplication and its potential to simplify your development workflow, please consider showing your support by &lt;a href="https://github.com/amplication/amplication"&gt;starring our repository on GitHub&lt;/a&gt;. Your 🌟 will help us reach more developers and continue to improve the platform.&lt;/p&gt;

</description>
      <category>node</category>
      <category>nestjs</category>
      <category>database</category>
      <category>prisma</category>
    </item>
    <item>
      <title>SQL Versus NoSQL Databases: Which to Use, When, and Why</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Fri, 21 Apr 2023 08:14:29 +0000</pubDate>
      <link>https://dev.to/amplication/sql-versus-nosql-databases-which-to-use-when-and-why-2ln2</link>
      <guid>https://dev.to/amplication/sql-versus-nosql-databases-which-to-use-when-and-why-2ln2</guid>
      <description>&lt;p&gt;Choosing a suitable database for storing specific data types is vital in software development. The right choice means we can scale up an application quickly and handle increasing user requests without encountering problems. Appropriate data storage also gives us an easier time when adding new features to the application. It lets us manipulate users' data effectively and efficiently. However, choosing the correct database for your application is no mean feat, given the number of available databases.&lt;/p&gt;

&lt;p&gt;Databases can be categorized into two primary types: SQL, which is an acronym for Structured Query Language, and NoSQL, which stands for "not only SQL" or "non-SQL." In this article, we'll review the differences between SQL and NoSQL databases, examine their appropriate use cases, and look at how to work with each using Node.js, so you can confidently opt for the right one in your next project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Differences Between SQL and NoSQL Databases
&lt;/h2&gt;

&lt;p&gt;There are many differences between SQL and NoSQL databases, such as how they handle relational data, their query languages, and their supported tools for development purposes. In this section, we'll look at their most significant differences.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handling Relations
&lt;/h3&gt;

&lt;p&gt;SQL databases handle data relations using tables. The tables can be interconnected using key constraints. There are three types of relations in SQL databases: one-to-one, one-to-many, and many-to-many.&lt;/p&gt;

&lt;p&gt;On the other hand, NoSQL databases generally do not handle data relations for us. So, for example, if we want to combine data from two documents in a MongoDB database, we typically need to write our own code to apply the logic we want (or use aggregation features such as MongoDB's $lookup).&lt;/p&gt;
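&lt;p&gt;For instance, a hand-rolled join over two MongoDB-style document collections might look like the sketch below. Plain arrays stand in for query results here, so the snippet runs without a database; the collection names and fields are invented for illustration:&lt;/p&gt;

```javascript
// Simulated collections: in MongoDB these would be the results of
// db.users.find({}) and db.orders.find({}).
const users = [
  { _id: 1, name: "Ada" },
  { _id: 2, name: "Grace" },
];
const orders = [
  { _id: 10, userId: 1, total: 25 },
  { _id: 11, userId: 1, total: 40 },
  { _id: 12, userId: 2, total: 15 },
];

// Application-side "join": index orders by userId once, then attach
// the matching orders to each user document.
function joinUsersWithOrders(users, orders) {
  const byUser = new Map();
  for (const order of orders) {
    if (!byUser.has(order.userId)) byUser.set(order.userId, []);
    byUser.get(order.userId).push(order);
  }
  return users.map((u) => ({ ...u, orders: byUser.get(u._id) ?? [] }));
}

console.log(joinUsersWithOrders(users, orders));
```

&lt;p&gt;This is exactly the bookkeeping a relational database would do for us with a single JOIN clause.&lt;/p&gt;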

&lt;h3&gt;
  
  
  Schema vs Schemaless
&lt;/h3&gt;

&lt;p&gt;Schema is a term that describes the objects in a database, such as tables, views, relationships, and indexes. All SQL databases use schemas, but different SQL databases treat them differently. In Oracle and MySQL, there is no specific database object named schema; however, they have other database objects, like tables, views, and relationships. PostgreSQL and SQL Server do have a specific schema database object, and a single database can contain multiple schemas, as in the case of PostgreSQL.&lt;/p&gt;

&lt;p&gt;By contrast, NoSQL databases are schemaless. As such, NoSQL databases provide flexibility when storing user data. We don't need to keep the data structurally consistent, as in SQL databases. Taking MongoDB as an example, we can store user data in different formats so long as it is in BSON (the data format that MongoDB supports).&lt;/p&gt;

&lt;h3&gt;
  
  
  Querying Data
&lt;/h3&gt;

&lt;p&gt;In SQL databases, the querying languages are mostly the same. There are some differences in syntax and in how each database executes actions, but these are minor. For example, with PostgreSQL, let's say we want to query all the users with all columns from the users table. We can write our query like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;select&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To list all databases in PostgreSQL, we use the psql client's meta-command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="err"&gt;\&lt;/span&gt;&lt;span class="n"&gt;l&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With another SQL database, such as MySQL, the syntax is different when we try to list all the databases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;show&lt;/span&gt; &lt;span class="n"&gt;databases&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In NoSQL databases, the query language varies much more: each NoSQL database has its own query language. For example, in MongoDB, if we want to retrieve all users from the users collection, we write the query as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.users.find({})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whereas with another NoSQL database, like Redis, if we want to retrieve the value of a key in a Redis database, we write the query as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET {key_name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Scalability
&lt;/h3&gt;

&lt;p&gt;There are two ways to scale the database for SQL and NoSQL databases: horizontal and vertical scaling. With horizontal scaling, we add more nodes to the databases. With vertical scaling, we add more RAM and CPU to the database node.&lt;/p&gt;

&lt;h4&gt;
  
  
  Horizontal Scaling for SQL Databases
&lt;/h4&gt;

&lt;p&gt;Horizontal scaling for SQL databases is typically handled differently than for NoSQL databases. With SQL databases, like PostgreSQL, we can apply sharding or add replicas to scale the database horizontally.&lt;/p&gt;

&lt;p&gt;If our use case targets read capability, adding replicas is a great option; PostgreSQL provides built-in replication for scaling out reads. However, if we are targeting both read and write capabilities, the solution is much more complicated. PostgreSQL is not designed for write-heavy, distributed workloads out of the box, so we must shard data across partitioned tables or use logical replication to scale both writes and reads.&lt;/p&gt;
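&lt;p&gt;To make the read-scaling idea concrete, here is a minimal sketch of read/write splitting in front of a primary-plus-replicas setup. The node objects are stubs standing in for real PostgreSQL connections:&lt;/p&gt;

```javascript
// Writes always go to the primary; reads rotate across replicas
// round-robin. A production router would also handle replication
// lag and failover, which this sketch ignores.
function createRouter(primary, replicas) {
  let i = 0;
  return {
    write(sql) { return primary.query(sql); },
    read(sql) {
      const replica = replicas[i];
      i = (i + 1) % replicas.length; // round-robin across replicas
      return replica.query(sql);
    },
  };
}

// Stub "connections" that just report which node ran the statement.
const makeNode = (name) => ({ query: (sql) => `${name} ran: ${sql}` });
const router = createRouter(makeNode("primary"), [
  makeNode("replica-1"),
  makeNode("replica-2"),
]);

console.log(router.write("INSERT INTO users VALUES (1)"));
console.log(router.read("SELECT * FROM users"));
console.log(router.read("SELECT * FROM users"));
```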

&lt;h4&gt;
  
  
  Horizontal Scaling for NoSQL Databases
&lt;/h4&gt;

&lt;p&gt;With NoSQL databases, such as MongoDB, we can use sharding and replica sets for horizontal scaling. Sharding is usually preferred over replica sets alone when scaling horizontally in MongoDB. With sharding, we partition the data into slices spread across multiple shards (each typically backed by a replica set), and the shards together serve the complete dataset. With replica sets, we copy the whole dataset from the primary node to the other nodes. As a result, write capacity with replica sets alone is lower than with sharding.&lt;/p&gt;

&lt;p&gt;The differences between sharding and replica sets for horizontal scaling in MongoDB are complex and &lt;a href="https://www.geeksforgeeks.org/mongodb-replication-and-sharding/"&gt;worth further exploration&lt;/a&gt;.&lt;/p&gt;
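&lt;p&gt;The core of sharding is a routing function that assigns each document to exactly one shard. The sketch below uses a simple hash-modulo scheme purely for illustration; real MongoDB shards by key ranges or hashed indexes, not this toy hash:&lt;/p&gt;

```javascript
// Route a document to a shard by hashing its shard key and taking it
// modulo the shard count.
function shardFor(key, shardCount) {
  // Tiny deterministic string hash (djb2-style).
  let h = 5381;
  for (const ch of String(key)) h = (h * 33 + ch.charCodeAt(0)) >>> 0;
  return h % shardCount;
}

const shards = [[], [], []]; // three shards
const docs = [
  { userId: "u1", total: 10 },
  { userId: "u2", total: 20 },
  { userId: "u3", total: 30 },
];
for (const doc of docs) {
  shards[shardFor(doc.userId, shards.length)].push(doc);
}

// Every document lands on exactly one shard; the union of all shards
// is the complete dataset.
console.log(shards.map((s) => s.length));
```

&lt;p&gt;Because the routing is deterministic, any node can compute which shard holds a given key without consulting the others.&lt;/p&gt;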

&lt;h3&gt;
  
  
  Node.js Tooling
&lt;/h3&gt;

&lt;p&gt;In the Node.js environment, several tools offer support for SQL or NoSQL databases or, in some cases, both. Prisma is one of the libraries that supports both SQL and NoSQL databases. With Prisma, we can work with several databases, such as PostgreSQL, MySQL, MongoDB, and SQL Server.&lt;/p&gt;

&lt;h4&gt;
  
  
  SQL: MySQL
&lt;/h4&gt;

&lt;p&gt;Let's say our SQL database has two tables: &lt;code&gt;Author&lt;/code&gt; and &lt;code&gt;Blog&lt;/code&gt;. The Blog and Author tables can be joined using the &lt;code&gt;authorId&lt;/code&gt; key. We define these two tables in Prisma in the &lt;code&gt;schema.prisma&lt;/code&gt; file as below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//schema.prisma
model Blog {
  id Int @id @default(autoincrement())
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  title String @db.VarChar(255)
  content String?
  published Boolean @default(false)
  author Author @relation(fields: [authorId], references: [id])
  authorId Int
}

model Author {
  id Int @id @default(autoincrement())
  email String @unique
  name String?
  blog Blog[]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the &lt;code&gt;Blog&lt;/code&gt; model, we defined an author field related to the model &lt;code&gt;Author&lt;/code&gt; by &lt;code&gt;authorId&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To migrate the currently defined schemas with our MySQL database, we run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npx prisma migrate dev --name init
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(Note that this command is used for development purposes only. Production database migration &lt;a href="https://www.prisma.io/docs/concepts/components/prisma-migrate/migrate-development-production"&gt;requires a different process&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;We should see similar output from the console as the one below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;The following migration(s) have been created and applied from new schema changes:

migrations/
  └─ 20230212011801_init/
    └─ migration.sql
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our database is now in sync with our schema. To add several authors to the &lt;code&gt;Author&lt;/code&gt; table, we need to write the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createMany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;donald.le@gmail.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Donald Le&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;evans.chris@gmail.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Evan Chris&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking the Author table in the database, we see these two authors were added.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+----+-----------------------+------------+
| id | email | name |
+----+-----------------------+------------+
| 1 | donald.le@gmail.com | Donald Le |
| 2 | evans.chris@gmail.com | Evan Chris |
+----+-----------------------+------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To add these authors' blog posts to the database, run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;blog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createMany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A great blog&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A great blog&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Another great blog&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Another great blog&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="p"&gt;}],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking the &lt;code&gt;Blog&lt;/code&gt; table, we see these two blogs were added:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+----+-------------------------+-------------------------+--------------------+--------------------+-----------+----------+
| id | createdAt               | updatedAt               | title              | content            | published | authorId |
+----+-------------------------+-------------------------+--------------------+--------------------+-----------+----------+
| 1  | 2023-02-12 01:30:35.629 | 2023-02-12 01:30:35.629 | A great blog       | A great blog       | 0         | 1        |
| 2  | 2023-02-12 01:30:35.629 | 2023-02-12 01:30:35.629 | Another great blog | Another great blog | 0         | 2        |
+----+-------------------------+-------------------------+--------------------+--------------------+-----------+----------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To query all blogs of the author with &lt;code&gt;authorId&lt;/code&gt; of &lt;code&gt;1&lt;/code&gt;, we write our code as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;blog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;findMany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's what we see as output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;id:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;createdAt:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2023-02-12T01:30:35.629Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;updatedAt:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2023-02-12T01:30:35.629Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;title:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A great blog"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;content:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A great blog"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;published:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;authorId:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  NoSQL: MongoDB
&lt;/h4&gt;

&lt;p&gt;Let's say we want to work with a NoSQL database like MongoDB (Mongo Atlas), and we have two models: &lt;code&gt;Blog&lt;/code&gt; and &lt;code&gt;Author&lt;/code&gt;. These models are related to each other through &lt;code&gt;authorId&lt;/code&gt;. We then define these models in the &lt;code&gt;schema.prisma&lt;/code&gt; file as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//schema.prisma
model Blog {
  id String @id @default(auto()) @map("_id") @db.ObjectId
  title String
  content String
  author Author @relation(fields: [authorId], references: [id])
  authorId String @db.ObjectId
}

model Author {
  id String @id @default(auto()) @map("_id") @db.ObjectId
  email String @unique
  name String?
  address Address?
  blogs Blog[]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To apply these defined models to MongoDB, we run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npx prisma generate 
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon checking the Mongo Atlas database, we see two collections are created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IlLw2n6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/sql-versus-nosql-databases-which-to-use-when-and-why/1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IlLw2n6k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/sql-versus-nosql-databases-which-to-use-when-and-why/1.png" alt="Two collections created in Mongo Atlas" width="800" height="335"&gt;&lt;/a&gt;&lt;br&gt;Figure 1: Two collections created in Mongo Atlas
  &lt;/p&gt;



&lt;p&gt;We can add a new author to the &lt;code&gt;Author&lt;/code&gt; collection and also add a new blog for that author with the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;author&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Donald Mathew&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;donald.matthew@gmail.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;blogs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;create&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;This is an interesting blog post&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Lots of really interesting stuff&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under the &lt;code&gt;Author&lt;/code&gt; collection in Mongo Atlas, we can see that the new author has been added successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KKcuIBQi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/sql-versus-nosql-databases-which-to-use-when-and-why/2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KKcuIBQi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/sql-versus-nosql-databases-which-to-use-when-and-why/2.png" alt="A new author is added to the Author collection" width="653" height="217"&gt;&lt;/a&gt;&lt;br&gt;Figure 2: A new author is added to the &lt;code&gt;Author&lt;/code&gt; collection
  &lt;/p&gt;



&lt;p&gt;And under the &lt;code&gt;Blog&lt;/code&gt; collection, we see the new blog, too.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DdooKXCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/sql-versus-nosql-databases-which-to-use-when-and-why/3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DdooKXCg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/sql-versus-nosql-databases-which-to-use-when-and-why/3.png" alt="A new blog is added to the Blog collection" width="653" height="217"&gt;&lt;/a&gt;&lt;br&gt;Figure 3: A new blog is added to the &lt;code&gt;Blog&lt;/code&gt; collection
  &lt;/p&gt;



&lt;p&gt;We can also query all the blogs that belong to the author &lt;code&gt;Donald Mathew&lt;/code&gt; with &lt;code&gt;authorId&lt;/code&gt; of &lt;code&gt;63e8580b1076c768fd1fe772&lt;/code&gt; by running the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;allUsers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;prisma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;blog&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;findMany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;where&lt;/span&gt;&lt;span class="p"&gt;:{&lt;/span&gt;&lt;span class="na"&gt;authorId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;63e8580b1076c768fd1fe772&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;dir&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;allUsers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;depth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running the code will show the blogs that belong to the author as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;id:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"63e8580b1076c768fd1fe773"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;title:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"This is an interesting blog post"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;content:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Lots of really interesting stuff"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="err"&gt;authorId:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"63e8580b1076c768fd1fe772"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's how we can use a Node.js library like Prisma to work with NoSQL and SQL databases. &lt;a href="https://www.prisma.io/docs"&gt;Prisma's documentation page&lt;/a&gt; gives further details about this process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing the Right Database for a Specific Use Case
&lt;/h2&gt;

&lt;p&gt;Now that we've reviewed the differences between SQL and NoSQL databases, let's look at some specific use cases and determine which database type would be the most appropriate fit in each instance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Relational Data
&lt;/h3&gt;

&lt;p&gt;Relational data might include users' personal information, such as their full name, date of birth, children, and siblings—the kind of information that would logically be stored in a table. An SQL database is a good choice for these cases because features like &lt;code&gt;join&lt;/code&gt; and &lt;code&gt;primary key&lt;/code&gt; help us quickly query the relational data we need. Although a NoSQL database like MongoDB gives us flexibility when storing data and scales quickly, it is not designed for dealing with relational data.&lt;/p&gt;
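&lt;p&gt;To make the point concrete, here is the correlation that a single SQL &lt;code&gt;JOIN&lt;/code&gt; performs, sketched by hand in plain JavaScript over in-memory stand-ins for the &lt;code&gt;Author&lt;/code&gt; and &lt;code&gt;Blog&lt;/code&gt; tables (the author names are hypothetical; in an SQL database, one &lt;code&gt;JOIN&lt;/code&gt; query does all of this in a single step):&lt;/p&gt;

```javascript
// In-memory stand-ins for the Author and Blog tables (hypothetical data).
const authors = [
  { id: 1, name: "Mark Benson" },
  { id: 2, name: "Richard Owens" },
];
const blogs = [
  { id: 1, title: "A great blog", authorId: 1 },
  { id: 2, title: "Another great blog", authorId: 2 },
];

// The work that `SELECT b.title, a.name FROM Blog b
// JOIN Author a ON b.authorId = a.id` does for us in SQL, done by hand:
function joinBlogsWithAuthors(blogs, authors) {
  const authorsById = new Map(authors.map((a) => [a.id, a]));
  return blogs.map((b) => ({
    title: b.title,
    author: authorsById.get(b.authorId).name,
  }));
}

console.log(joinBlogsWithAuthors(blogs, authors));
```

&lt;p&gt;The database engine also uses the primary key's index to make this lookup fast, which is exactly the machinery we would have to rebuild ourselves in a store not designed for relations.&lt;/p&gt;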

&lt;h3&gt;
  
  
  Flexible Data Structures
&lt;/h3&gt;

&lt;p&gt;For data with a flexible structure, such as user-generated data, a NoSQL database is the better choice. Suppose, for example, we want to research how users interact with our application and which functionalities they use most often. We would first store all user logs in a database and then dig into those logs. Given the varied structure of logs coming from the many functionalities in our application, a NoSQL database like MongoDB would be a good fit.&lt;/p&gt;
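&lt;p&gt;The "flexible structure" point is easy to see in code: in a document database, records in the same collection don't need to share a schema, yet we can still aggregate across them. A minimal sketch with hypothetical log events:&lt;/p&gt;

```javascript
// Hypothetical user-activity logs: documents in one collection,
// each shaped differently depending on the feature that produced it.
const logs = [
  { userId: "u1", event: "search", query: "running shoes" },
  { userId: "u2", event: "checkout", cartTotal: 59.9, items: 2 },
  { userId: "u1", event: "share", network: "twitter", postId: "p42" },
  { userId: "u3", event: "search", query: "trail shoes", filters: ["size:42"] },
];

// Despite the heterogeneous shapes, we can still ask which
// functionality users touch most often:
function countByEvent(logs) {
  const counts = {};
  for (const log of logs) {
    counts[log.event] = (counts[log.event] || 0) + 1;
  }
  return counts;
}

console.log(countByEvent(logs)); // { search: 2, checkout: 1, share: 1 }
```

&lt;p&gt;In an SQL table we would need a column for every possible field up front (or a migration each time a feature adds one); a document store accepts the new shapes as they come.&lt;/p&gt;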

&lt;h3&gt;
  
  
  Low Latency Applications
&lt;/h3&gt;

&lt;p&gt;When working with applications that require a very low latency—say, ten or twenty milliseconds, as might be expected in a trading application or an online role-playing game—a NoSQL database like Redis would best serve the use case. Redis is known for its extremely fast reads. However, parts of the application that do not require such a fast server response, such as storing user profiles, would still benefit from a traditional SQL database, since its support for relational data outweighs the speed benefits of NoSQL there.&lt;/p&gt;
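&lt;p&gt;The access pattern a store like Redis serves is essentially "get/set a key, with an expiry." The sketch below mimics that pattern with an in-memory map; it is an illustration only, not the actual Redis client, which would issue the same &lt;code&gt;SET&lt;/code&gt;/&lt;code&gt;GET&lt;/code&gt; commands against a real server:&lt;/p&gt;

```javascript
// In-memory sketch of the GET/SET-with-TTL pattern that a
// low-latency key-value store like Redis provides (not the real client).
class TtlCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (Date.now() >= entry.expiresAt) {
      this.store.delete(key); // lazy expiry on read
      return null;
    }
    return entry.value;
  }
}

const cache = new TtlCache();
cache.set("quote:ACME", 189.72, 50); // hypothetical price, valid for 50 ms
console.log(cache.get("quote:ACME"));
```

&lt;p&gt;Because every operation is a hash lookup on a key, reads stay in the sub-millisecond range; the trade-off is that there is no way to express a &lt;code&gt;join&lt;/code&gt; across keys.&lt;/p&gt;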

&lt;h3&gt;
  
  
  Applications Targeting Fraud Detection or Personalization
&lt;/h3&gt;

&lt;p&gt;For organizations and their applications that are designed to detect fraud, like &lt;a href="https://neo4j.com/case-studies/icij/"&gt;International Consortium of Investigative Journalists&lt;/a&gt;, or try to improve customer experience via personalization, as in the case of &lt;a href="https://neo4j.com/users/tourism-media/"&gt;Tourism Media&lt;/a&gt;, a NoSQL graph database like &lt;a href="https://neo4j.com/"&gt;Neo4j&lt;/a&gt; is a good match. In these kinds of use cases, the quantity of data we're dealing with is enormous, and the pattern we're searching for in the data is often complex.&lt;/p&gt;

&lt;p&gt;Neo4j addresses both of these problems effectively. As a graph database, Neo4j has a query language that provides a nested-query mechanism, which is much simpler than using &lt;code&gt;join&lt;/code&gt; in an SQL database. This helps us find a pattern in a vast ocean of data with relative ease. Neo4j also offers excellent performance and high throughput when working with massive quantities of data. This makes it ideal for fraud detection and personalization applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tired of Writing Boilerplate Code when Working with Databases?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://amplication.com/"&gt;Amplication&lt;/a&gt; is an application development platform that generates all necessary boilerplate code, leaving you free to focus on creating business value. In addition, Amplication has first-class support for SQL and NoSQL databases, so you can generate applications quickly using whichever is the best fit for your use case and start shipping code immediately.&lt;/p&gt;

</description>
      <category>sql</category>
      <category>postgres</category>
      <category>mongodb</category>
      <category>database</category>
    </item>
    <item>
      <title>REST vs. gRPC - What’s the Difference?</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Wed, 19 Apr 2023 16:38:14 +0000</pubDate>
      <link>https://dev.to/amplication/rest-vs-grpc-whats-the-difference-3no0</link>
      <guid>https://dev.to/amplication/rest-vs-grpc-whats-the-difference-3no0</guid>
      <description>&lt;p&gt;Most modern applications rely on APIs for clients to interact with them. This dependency makes it critical to design APIs that are efficient, scalable, and uniform in nature. Different frameworks have come into the picture to introduce structure and uniformity into API design.&lt;/p&gt;

&lt;p&gt;REST has been around for a long time and is an industry standard for developing and designing APIs. gRPC is a more recent framework introduced by Google to create fast and scalable APIs.&lt;/p&gt;

&lt;p&gt;In this article, we'll talk in detail about these frameworks and which one may be better for your use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is REST?
&lt;/h2&gt;

&lt;p&gt;REST stands for representational state transfer and is the most common architectural style used to design APIs and web-based and microservices-based applications. REST is built on the standard HTTP protocol, where each RESTful web service represents a resource. These resources can be fetched or manipulated via a common interface using HTTP standard methods—GET, POST, PUT, DELETE.&lt;/p&gt;

&lt;p&gt;REST APIs are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Client-server independent:&lt;/strong&gt; User interface (client) concerns are separated from data storage (server) concerns.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stateless:&lt;/strong&gt; communication between the client and server contains all information necessary to process the request.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cacheable:&lt;/strong&gt; REST resources are cacheable on the client or server side for improved performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;REST APIs are designed to provide a uniform interface where all components follow the same rules to interact with each other. In addition, the design is based on a layered architecture, so each component can only view the actual layer it's interacting with.&lt;/p&gt;
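&lt;p&gt;As a sketch of that uniform interface, here is how the standard methods map onto operations on a resource. The in-memory &lt;code&gt;authors&lt;/code&gt; resource and the &lt;code&gt;handle&lt;/code&gt; dispatcher are hypothetical stand-ins for a real HTTP server's routing:&lt;/p&gt;

```javascript
// Hypothetical in-memory "authors" resource illustrating how the
// standard HTTP methods map onto CRUD operations in a REST API.
const authors = new Map();
let nextId = 1;

function handle(method, id, body) {
  switch (method) {
    case "POST": { // create a new resource
      const created = { id: nextId++, ...body };
      authors.set(created.id, created);
      return created;
    }
    case "GET": // read an existing resource
      return authors.get(id) || null;
    case "PUT": { // replace an existing resource
      const replaced = { id, ...body };
      authors.set(id, replaced);
      return replaced;
    }
    case "DELETE": // remove a resource
      return authors.delete(id);
    default:
      throw new Error("unsupported method: " + method);
  }
}

const alice = handle("POST", null, { name: "Alice" });
handle("PUT", alice.id, { name: "Alice B." });
console.log(handle("GET", alice.id)); // { id: 1, name: "Alice B." }
```

&lt;p&gt;Every resource in the system is manipulated through this same small set of verbs, which is what makes REST interfaces predictable across services.&lt;/p&gt;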

&lt;h2&gt;
  
  
  What Is gRPC?
&lt;/h2&gt;

&lt;p&gt;Google Remote Procedure Call, or gRPC, was created on top of the RPC model. gRPC is an open-source, cross-platform, high-speed communication framework built on HTTP 2.0 and is widely used for inter-service communication in distributed applications.&lt;/p&gt;

&lt;p&gt;gRPC is an extension of RPC, in which a client invokes a function hosted on a remote server or machine as if it were a local call. The HTTP 2.0 protocol allows gRPC to perform bi-directional streaming, provides built-in Transport Layer Security, and lets developers integrate services programmed in various languages.&lt;/p&gt;

&lt;p&gt;gRPC uses HTTP 2.0 as the transport layer, so developers can expose arbitrary function calls instead of being limited to a predefined set of HTTP methods on resources. gRPC also provides loose coupling between the server and client: a long-lived connection is created between them, and a new HTTP 2.0 stream is opened for every RPC call.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Fellow Coders...
&lt;/h2&gt;

&lt;p&gt;My fellow coders, just as we dared to dream of putting a man on the moon, we now have a new frontier to conquer. The Amplication repository is about to hit 10,000-stars on GitHub, and is a powerful tool that has the potential to revolutionize software development. With your support, we can take it to new heights. I call upon all of you to join this noble endeavor by &lt;a href="https://github.com/amplication/amplication"&gt;starring the Amplication repository on GitHub&lt;/a&gt;. Together, we can achieve greatness and propel the world of software development to new frontiers. Let us not shrink from the challenge, but rather embrace it with unwavering determination. Thank you, and let us boldly go where no developer has gone before!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--se1Rf9j---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/rest-vs-grpc-whats-the-difference/jfk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--se1Rf9j---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static-assets.amplication.com/blog/rest-vs-grpc-whats-the-difference/jfk.webp" alt="Preident John F. Kennedy giving a speech." width="480" height="360"&gt;&lt;/a&gt;&lt;br&gt;Ask not what Amplication can do for you - ask what you can do for Amplication!
  &lt;/p&gt;



&lt;h2&gt;
  
  
  Comparing REST to gRPC
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Working Model
&lt;/h3&gt;

&lt;p&gt;REST is built on the HTTP 1.1 protocol and lets you define APIs based on a request-response model. gRPC, on the other hand, is built on the HTTP 2.0 protocol and uses its bi-directional communication feature in addition to the conventional request-response model. This means that in REST APIs, requests on a given connection are served sequentially, whereas in gRPC, numerous requests are processed simultaneously, as HTTP 2.0 allows multiplexing.&lt;/p&gt;

&lt;p&gt;REST's working model also relies on standard HTTP status codes and conventions when you're defining APIs. gRPC instead follows a contract defined in a &lt;code&gt;.proto&lt;/code&gt; file, which includes the data exchange rules that servers and clients need to follow.&lt;/p&gt;
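&lt;p&gt;A &lt;code&gt;.proto&lt;/code&gt; contract might look like the following. This is a hypothetical sketch (the service and message names are made up), but it shows how both sides of the wire share one definition of the callable functions and their message types:&lt;/p&gt;

```protobuf
// greeter.proto -- hypothetical gRPC contract shared by server and client
syntax = "proto3";

service Greeter {
  // A callable remote function, rather than a URL plus an HTTP verb.
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```

&lt;p&gt;Code generators then turn this one file into typed client and server stubs in each language the teams use.&lt;/p&gt;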

&lt;h3&gt;
  
  
  Communication Model
&lt;/h3&gt;

&lt;p&gt;REST and gRPC also use different messaging formats to send requests and receive responses.&lt;/p&gt;

&lt;p&gt;REST APIs generally use JSON for receiving and returning messages between the server and the client. Because JSON is text-based and human-readable, it's easy to work with and platform-agnostic, though less compact than binary messaging formats.&lt;/p&gt;

&lt;p&gt;gRPC uses protocol buffers for serialization and communication, just like REST uses JSON. Protocol Buffers, or Protobuf, are an efficient and highly packed messaging format for serializing data, which results in swift response delivery. In addition, Protobuf is faster when transmitting messages between systems because it performs the marshaling of message packets (packing parameters and a remote function into a binary message packet) before sending it over the network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Browser Support and Latency
&lt;/h3&gt;

&lt;p&gt;The HTTP 1.1 protocol provides universal browser support, meaning REST is open for all browsers without any prerequisites. The HTTP 2.0 protocol, on the other hand, provides limited browser support, meaning gRPC is not compatible with many browsers, usually older versions. Below is a list of some popular browsers along with their versions that support the HTTP 2.0 protocol and hence gRPC as well:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
  &lt;tr&gt;
    &lt;th&gt;Browser&lt;/th&gt;
    &lt;th&gt;Unsupported Versions&lt;/th&gt;
    &lt;th&gt;Supported Versions&lt;/th&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Google Chrome&lt;/td&gt;
    &lt;td&gt;4 to 40&lt;/td&gt;
    &lt;td&gt;41 to 115&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Mozilla Firefox&lt;/td&gt;
    &lt;td&gt;2 to 35&lt;/td&gt;
    &lt;td&gt;36 to 114&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Internet Explorer&lt;/td&gt;
    &lt;td&gt;6 to 10&lt;/td&gt;
    &lt;td&gt;11 (partially supports)&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Safari&lt;/td&gt;
    &lt;td&gt;3.1 to 8&lt;/td&gt;
    &lt;td&gt;
      9 to 10.1 (partial support); 11 to 16.4 (full support)
    &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Microsoft Edge&lt;/td&gt;
    &lt;td&gt;All other older versions&lt;/td&gt;
    &lt;td&gt;12 to 112&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;Opera&lt;/td&gt;
    &lt;td&gt;10 to 27&lt;/td&gt;
    &lt;td&gt;28 to 95&lt;/td&gt;
  &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Data Formats and Serialization
&lt;/h3&gt;

&lt;p&gt;REST supports multiple data formats, such as JSON and XML. JSON is the most commonly used, as it's easy to understand and flexible, and REST generally doesn't require a rigid data structure. This flexibility makes it best suited to transmitting non-critical data. Meanwhile, gRPC only supports the Protobuf message format, which allows data to be transferred more reliably.&lt;/p&gt;

&lt;p&gt;As mentioned above, Protocol Buffers compress the data for faster transmission. gRPC's use of these buffers results in language-agnostic serialization, whereas REST serializes data into text-based formats like JSON and XML using each language's native data types.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to Use gRPC over REST
&lt;/h2&gt;

&lt;p&gt;Before we get to the preferred use cases for gRPC, here are a few words on REST. REST is beneficial when an application needs numerous third-party integrations; it's also suitable for developing cloud-based applications, since REST's stateless calls can easily be retried or rerouted if a technical failure occurs. REST also supports all kinds of browsers for your application, but this is where we get to the downsides.&lt;/p&gt;

&lt;p&gt;In exchange for that excellent browser support, you get higher latency with REST. gRPC supports fewer browsers, but latency is rarely an issue. gRPC is the preferred choice when developing lightweight microservice applications because of this reduced latency and faster data transmission. Devs mostly use gRPC with microservices that require real-time message delivery, as its message serialization is highly efficient. It's especially good for connecting systems where lightweight messaging, low-power networks, and high efficiency are required.&lt;/p&gt;

&lt;p&gt;Another advantage over REST is that gRPC provides native code generation so developers can develop applications in a multilingual or language-agnostic environment.&lt;/p&gt;

&lt;p&gt;All in all, gRPC is used in applications that need multiplexed streams or for mobile applications with limited browser support.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's tough to say which is better—REST or gRPC. But we can conclude that gRPC is most beneficial for large-scale distributed applications. However, one disadvantage of gRPC is that it's just not popular enough and has limited browser compatibility. This disadvantage means a proxy tool may be needed to process a request from the browser to the gRPC server.&lt;/p&gt;

</description>
      <category>api</category>
      <category>technical</category>
      <category>rest</category>
      <category>grpc</category>
    </item>
    <item>
      <title>Request Tracing in Node.js</title>
      <dc:creator>Yuval Hazaz</dc:creator>
      <pubDate>Thu, 13 Apr 2023 15:27:54 +0000</pubDate>
      <link>https://dev.to/amplication/request-tracing-in-nodejs-3k57</link>
      <guid>https://dev.to/amplication/request-tracing-in-nodejs-3k57</guid>
      <description>&lt;p&gt;There’s a saying that software developers like us spend 90% of our time on debugging, and only 10% of our time actually writing code. This is a bit of an exaggeration! It is true that debugging is a significant part of our work, though, especially in this era of microservices architecture. Today, it’s not unusual for us to have hundreds—even thousands—of microservices running simultaneously in our production servers.&lt;/p&gt;

&lt;p&gt;Traditionally, we rely on logs when it comes to debugging software problems. However, not all logs are helpful. They can be unspecific, for example, indicating only the error status code or showing a generic error, such as “Something went wrong.” Even if the log records a more specific error—like “User request id is invalid” with a “400” bad request error—it can still take us hours or days to figure out the root cause of the problem due to the high number of services involved. This is where request tracing comes into play.&lt;/p&gt;

&lt;p&gt;By applying request tracing in our Node.js application, we can see the flow of the problematic error represented visually. This helps us to pinpoint which services may be involved in the error, and find the root cause of the problem quickly. In this article, we’ll define request tracing, explore its importance, and look at how we can efficiently apply request tracing in our Node.js application.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Request Tracing?
&lt;/h2&gt;

&lt;p&gt;Request tracing refers to the technique of finding subrequests from the multiple services that are triggered when a single request is made. In a microservices architecture, especially in a distributed application, a service often needs to integrate with other services, databases, or third-party dependencies. As a result, when a request is executed, inner requests are triggered too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Need Request Tracing
&lt;/h2&gt;

&lt;p&gt;In order to debug an error, we need to understand how the request that generated the error is created. Logs alone are not enough; they only tell us which error type has occurred, but do not tell us the context of the issue.&lt;/p&gt;

&lt;p&gt;Traces, on the other hand, show us the requests executed inside a single parent request. We can also see the request that was made immediately before the parent request. In short, request tracing provides us with an overview of the request process. By combining traces with logs, we can identify the root cause of the problem. Moreover, tracing reduces our debugging time, since it highlights the true origin of an issue in a simple and quick way, bypassing cascades of log errors.&lt;/p&gt;

&lt;h2&gt;
  
  
  An Amplication Request
&lt;/h2&gt;

&lt;p&gt;Hey developer, while you're reading about request tracing, we have a request for you... We'd love to get a &lt;a href="https://github.com/amplication/amplication" rel="noopener noreferrer"&gt;star from you on GitHub for Amplication&lt;/a&gt;. Think of it as a virtual high-five that makes our developer's hearts skip a beat. Plus, when we reach 10,000 stars, I've promised the team a pizza party. So, if you want to see some thrilled developers with tomato sauce on their faces, &lt;a href="https://github.com/amplication/amplication" rel="noopener noreferrer"&gt;give us a 🌟 on GitHub&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Please don't leave us hanging like a callback without a promise! So click that shiny star button and spread some Amplication love.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2Fgrogu.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2Fgrogu.webp" alt="Grogu eating pizza"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Request Tracing Works
&lt;/h2&gt;

&lt;p&gt;When a user performs an action—for example, clicking the “Confirm” button to buy a shoe from an e-commerce store—several requests are executed. First, a new trace will be created after the user clicks “Confirm.” Then a parent span will be triggered. Inside the parent span, there could be an API request to check the authenticity of the user, and another API request to confirm whether the user has permission to make the payment. Additional API requests could be executed, such as checking that enough shoes are in stock, or verifying that the user's balance is high enough to make the payment.&lt;/p&gt;
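&lt;p&gt;To make the structure concrete, here is a toy in-memory model of a trace (purely illustrative; a real tracer such as OpenTelemetry builds and records this hierarchy for you):&lt;/p&gt;

```javascript
// Toy model of a trace: a parent span with child spans, mirroring the
// e-commerce checkout example above. Illustrative only; a real tracing
// library maintains this hierarchy for you.
class Span {
  constructor(name, parent = null) {
    this.name = name;
    this.parent = parent;
    this.children = [];
    if (parent) parent.children.push(this);
  }
}

// Clicking "Confirm" starts the parent span...
const checkout = new Span("POST /checkout");
// ...and every sub-request is captured as a child span under it.
new Span("verify-user", checkout);
new Span("check-permissions", checkout);
new Span("check-stock", checkout);
new Span("check-balance", checkout);

// Because the children are ordered, a failure in any one of them can be
// correlated with whatever ran immediately before it.
console.log(checkout.children.map((s) => s.name));
```

&lt;p&gt;A real span also carries timestamps and attributes; the point here is only the parent/child relationship that a trace records.&lt;/p&gt;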

&lt;p&gt;These requests will all be captured and included in the parent span. As a result, we can see how these requests correlate to one another, and if a request fails, we can tell what happened immediately beforehand that may have acted as a trigger.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Implement Request Tracing for Node.js Applications
&lt;/h2&gt;

&lt;p&gt;To implement request tracing for a Node.js application, we need to assign a unique request ID to every request in the current service and forward that ID to other services. We can then use the ID to trace and visualize requests across our system architecture and identify the root cause of a failure.&lt;/p&gt;
&lt;h3&gt;
  
  
  Common Tools that Support Request Tracing
&lt;/h3&gt;

&lt;p&gt;There are a number of open source observability backend tools—like Jaeger, Zipkin, and Signoz—that support storing and visualizing request traces. Previously, two popular standards existed for the trace format: OpenTracing and OpenCensus. Tools often supported only one or the other, making it a challenge to integrate software applications with the tools they need. Fortunately, OpenTracing and OpenCensus merged into OpenTelemetry in 2019, allowing software developers to implement request tracing with relative ease.&lt;/p&gt;

&lt;p&gt;OpenTelemetry is a vendor-agnostic collection of APIs, SDKs, and tools for generating, collecting, and exporting telemetry data, including traces that record the time spent processing each request and capture the subrequests as individual spans. For different observability backends to visualize traces, applications need to emit them in OpenTelemetry formats.&lt;/p&gt;
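&lt;p&gt;For example, OpenTelemetry propagates trace context between services over HTTP using the W3C Trace Context &lt;code&gt;traceparent&lt;/code&gt; header. A hand-rolled parser, shown only to illustrate the format (the SDK's propagators handle this for you), might look like:&lt;/p&gt;

```javascript
// Parse a W3C Trace Context "traceparent" header, the propagation format
// OpenTelemetry uses over HTTP:
//   version(2 hex)-traceId(32 hex)-spanId(16 hex)-flags(2 hex)
function parseTraceparent(header) {
  const m = /^([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(
    header
  );
  if (!m) return null;
  const [, version, traceId, spanId, flags] = m;
  // The lowest flag bit marks whether the trace was sampled.
  return { version, traceId, spanId, sampled: (parseInt(flags, 16) & 1) === 1 };
}
```

&lt;p&gt;Every service that receives this header can attach its own spans to the same &lt;code&gt;traceId&lt;/code&gt;, which is what lets a backend like Jaeger stitch the full request tree together.&lt;/p&gt;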
&lt;h3&gt;
  
  
  Implement Request Tracing as a Middleware
&lt;/h3&gt;

&lt;p&gt;Let’s say we have an application that lets users create blog posts to review movies. The application will provide APIs for users to create an account, log in, and create new blog posts.&lt;br&gt;
We will use OpenTelemetry to apply request tracing to the application. The traces will then be forwarded to Jaeger. Our application will use MongoDB as a database and Express as a web server. We need to install the following dependencies for OpenTelemetry to attach traces to API requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm install --save @opentelemetry/api
npm install --save @opentelemetry/sdk-trace-node
npm install --save opentelemetry-instrumentation-express
npm install --save @opentelemetry/instrumentation-mongodb
npm install --save @opentelemetry/instrumentation-http
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to install the Jaeger dependency to forward OpenTelemetry traces to Jaeger so that we can view the traces via the Jaeger GUI page later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;npm install --save @opentelemetry/exporter-jaeger
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To implement request tracing as a middleware, let’s create a new file in the root directory of the project, named &lt;code&gt;tracing.js&lt;/code&gt;. Copy the following code into the file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Resource&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/resources&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SemanticResourceAttributes&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/semantic-conventions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SimpleSpanProcessor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/sdk-trace-base&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;NodeTracerProvider&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/sdk-trace-node&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;trace&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/api&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;//exporter&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;JaegerExporter&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/exporter-jaeger&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;//instrumentations&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ExpressInstrumentation&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;opentelemetry-instrumentation-express&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MongoDBInstrumentation&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/instrumentation-mongodb&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;HttpInstrumentation&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/instrumentation-http&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;registerInstrumentations&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@opentelemetry/instrumentation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;//Exporter&lt;/span&gt;
&lt;span class="nx"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;exporter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;JaegerExporter&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;http://localhost:14268/api/traces&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;NodeTracerProvider&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;resource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Resource&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;SemanticResourceAttributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SERVICE_NAME&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addSpanProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SimpleSpanProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;exporter&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;register&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nf"&gt;registerInstrumentations&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;instrumentations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HttpInstrumentation&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ExpressInstrumentation&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MongoDBInstrumentation&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;tracerProvider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getTracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, we register instrumentations so that traces are attached to MongoDB requests, Express server requests, and plain HTTP requests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="nx"&gt;instrumentations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HttpInstrumentation&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ExpressInstrumentation&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MongoDBInstrumentation&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The traces will be sent to the Jaeger endpoint: &lt;code&gt;http://localhost:14268/api/traces&lt;/code&gt;.&lt;br&gt;
We also need to add one line of code at the top of the application's entry file, before any other imports, so that OpenTelemetry can capture traces from the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./tracing&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;blog-movie-review-service&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we start our application to check the request traces, we need to bring up the Jaeger service to store them. Run the following command to start Jaeger using Docker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14250:14250 \
  -p 14268:14268 \
  -p 14269:14269 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.32
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s bring up the application, and then go to the Jaeger GUI page at &lt;code&gt;http://localhost:16686&lt;/code&gt; to see the captured traces.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2F1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2F1.png" alt="Captured traces are shown in Jager GUI page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 1: Captured traces shown in the Jaeger GUI page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We can see that the traces have been captured by OpenTelemetry and are being stored in Jaeger. To check the trace related to the API request for creating a new blog post, click on the first trace (Figure 1) to bring up details (Figure 2).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2F2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2F2.png" alt="Detail about the trace for success request to create a new post"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 2: Details of the trace for a successful request to create a new post&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here, inside the trace, we can see spans for the HTTP request (&lt;code&gt;POST /v1/posts&lt;/code&gt;) and the MongoDB request (&lt;code&gt;mongodb.insert&lt;/code&gt;).&lt;br&gt;
Let’s see what a failed API request looks like when we provide an invalid &lt;code&gt;authorId&lt;/code&gt; for the post:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2F3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstatic-assets.amplication.com%2Fblog%2Frequest-tracing-in-nodejs%2F3.png" alt="Detail about the trace for failed request to create a new post"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Figure 3: Details of the trace for a failed request to create a new post&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here, we see that the API request failed at the HTTP layer because the application rejected the invalid &lt;code&gt;authorId&lt;/code&gt;, so no request was sent to MongoDB to insert the blog post.&lt;br&gt;
By applying request tracing to our application, we can easily debug the application and discover at which step in the API request the problem arises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Request Tracing
&lt;/h2&gt;

&lt;p&gt;As we have seen, applying request tracing to a Node.js application enables us to investigate software issues quickly and effectively. To get the most out of it, we should follow three core best practices.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Follow OpenTelemetry Standard Formats
&lt;/h3&gt;

&lt;p&gt;Following OpenTelemetry standard formats allows other tools that integrate with OpenTelemetry (like Zipkin, Jaeger, or Prometheus) to store and visualize traces effectively. We can choose the tool that suits our specific needs—say, to achieve better visualization—without adopting a new trace format.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Choose a Unique String for Every Source
&lt;/h3&gt;

&lt;p&gt;Choosing a unique string for every source allows us to track down a software issue quickly based on that string. Let’s say we want to trace the failed API request for creating a new blog post in the demo application. From the application log, we obtain the unique trace ID attached to the API request. We can then search Jaeger for that trace ID to get a full picture of what went wrong.&lt;/p&gt;
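&lt;p&gt;For instance, if our logger embeds the trace ID in each line (the &lt;code&gt;traceId=...&lt;/code&gt; format below is an assumption about our own log format, not something Jaeger mandates), a small helper can pull out the ID to search with:&lt;/p&gt;

```javascript
// Extract the 32-hex-character trace ID from a log line such as:
//   "ERROR invalid authorId traceId=4bf92f3577b34da6a3ce929d0e0e4736"
// The "traceId=" prefix is a convention we chose for our own logger.
function traceIdFromLog(line) {
  const m = /traceId=([0-9a-f]{32})/.exec(line);
  return m ? m[1] : null;
}
```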

&lt;h3&gt;
  
  
  3. Choose an Appropriate Span Name
&lt;/h3&gt;

&lt;p&gt;Choosing a span name that matches the name of the method invoked by the span allows us to figure out what was going on in the application when the problem occurred. We should also add a custom span tag to make the trace easy to find. This enables us to search for the custom tag directly, instead of having to locate the trace ID first and then search for it.&lt;br&gt;
For example, we can add a custom span tag for the trace related to creating a new blog post. When we need to investigate an issue related to creating blog posts, we can search for the custom span tag, and all the traces for creating new movie review posts will be displayed instantly.&lt;/p&gt;
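&lt;p&gt;As a sketch, given spans exported with such a custom tag (the &lt;code&gt;operation&lt;/code&gt; tag name and the plain-object span shape below are our own invention for illustration), surfacing all blog-post traces is a simple filter:&lt;/p&gt;

```javascript
// Filter exported spans by a custom tag. The span objects here are a
// simplified, hypothetical shape; real backends expose tag search in
// their query UIs (e.g. Jaeger's tag filter).
function spansWithTag(spans, key, value) {
  return spans.filter((s) => s.tags && s.tags[key] === value);
}

const spans = [
  { name: "POST /v1/posts", tags: { operation: "create-post" } },
  { name: "mongodb.insert", tags: { operation: "create-post" } },
  { name: "GET /v1/posts", tags: { operation: "list-posts" } },
];

const createPostSpans = spansWithTag(spans, "operation", "create-post");
```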

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the world of distributed software and microservice architecture, an application log alone is not enough to debug when a problem arises. To achieve observability so that we can debug software problems quickly, we need to implement request tracing in our Node.js application. OpenTelemetry’s standard tracing format makes achieving traceability easy, since we can simply make use of one of the many tools readily available to aid in the request tracing process.&lt;/p&gt;

</description>
      <category>node</category>
      <category>backend</category>
      <category>testing</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
