<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: QURBAN AHMAD</title>
    <description>The latest articles on DEV Community by QURBAN AHMAD (@qur786).</description>
    <link>https://dev.to/qur786</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1117917%2F626ecfc9-4c6d-47bc-bc58-5d8f57354b1d.png</url>
      <title>DEV Community: QURBAN AHMAD</title>
      <link>https://dev.to/qur786</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/qur786"/>
    <language>en</language>
    <item>
      <title>Crowd Safety at Scale: Lessons from San Fermín and the Mahakumbh Mela Tragedy</title>
      <dc:creator>QURBAN AHMAD</dc:creator>
      <pubDate>Sun, 16 Feb 2025 07:26:29 +0000</pubDate>
      <link>https://dev.to/qur786/crowd-safety-at-scale-lessons-from-san-fermin-and-the-mahakumbh-mela-tragedy-34bi</link>
      <guid>https://dev.to/qur786/crowd-safety-at-scale-lessons-from-san-fermin-and-the-mahakumbh-mela-tragedy-34bi</guid>
      <description>&lt;p&gt;Large gatherings—whether religious festivals, concerts, or sporting events—are a testament to humanity’s ability to come together in celebration. But as we’ve seen time and again, they can also turn tragic when crowd management fails. The recent stampede at the Mahakumbh Mela in Prayagraj, Uttar Pradesh, which claimed 30 lives and left many injured, is a heartbreaking reminder of this reality.  &lt;/p&gt;

&lt;p&gt;But how can we prevent such tragedies in the future? A fascinating study published in &lt;em&gt;Nature&lt;/em&gt; (&lt;a href="https://www.nature.com/articles/s41586-024-08514-6" rel="noopener noreferrer"&gt;link to the paper&lt;/a&gt;) offers some groundbreaking insights. By analyzing crowd behavior at the San Fermín festival in Pamplona, Spain, researchers have uncovered patterns that could revolutionize how we manage large-scale events.  &lt;/p&gt;

&lt;p&gt;In this post, we’ll dive into the study’s findings, explore their implications, and discuss how technology and data-driven strategies can make large gatherings safer.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Science of Crowd Behavior: What the Study Revealed&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Researchers spent four years studying the San Fermín festival, which attracts thousands of participants for its famous “running of the bulls.” Using strategically placed cameras and sensors, they monitored crowd density and movement in a 50m x 20m plaza. Here’s what they found:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Crowd Density Matters&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At &lt;strong&gt;2-6 people per square meter&lt;/strong&gt;, crowds move in a relatively orderly fashion.
&lt;/li&gt;
&lt;li&gt;When density reaches &lt;strong&gt;6-8 people per square meter&lt;/strong&gt;, movement becomes more predictable but also more constrained.
&lt;/li&gt;
&lt;li&gt;At a critical threshold of &lt;strong&gt;9 people per square meter&lt;/strong&gt;, something extraordinary happens: the crowd begins to oscillate in a rhythmic, wave-like pattern every &lt;strong&gt;18 seconds&lt;/strong&gt;, even without any external triggers.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Why Does This Happen?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This phenomenon is an example of &lt;strong&gt;emergent behavior&lt;/strong&gt;—a situation where individuals unconsciously synchronize their actions, creating a collective pattern that isn’t directed by any single person or external factor. Here’s how it works:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At 9 people per square meter, individuals are packed so tightly that their movements are heavily constrained.
&lt;/li&gt;
&lt;li&gt;Even small shifts in position—like someone adjusting their footing or leaning slightly—create a ripple effect that propagates through the crowd.
&lt;/li&gt;
&lt;li&gt;Over time, these ripples synchronize into a rhythmic, wave-like motion, much like a pendulum swinging back and forth.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What Does It Look Like?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Imagine standing in a tightly packed crowd. Suddenly, you feel a gentle push from behind. A few seconds later, the push comes again, and then again, like a wave. This isn’t because someone is intentionally pushing you—it’s because the entire crowd is moving in unison, almost like a fluid. These waves can travel through the crowd, creating a back-and-forth motion.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Why Is This Dangerous?&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
While this rhythmic movement might sound almost poetic, it’s actually a warning sign of potential danger. Here’s why:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Increased Pressure:&lt;/strong&gt; The wave-like motion creates pressure within the crowd. As people are pushed back and forth, the force can build up, making it hard to breathe or move.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Loss of Control:&lt;/strong&gt; Individuals lose the ability to move independently. If someone falls or stumbles, the waves can make it nearly impossible for them to get back up, increasing the risk of trampling.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk of Stampedes:&lt;/strong&gt; If the pressure becomes too great, the crowd can panic, leading to a stampede. This is especially dangerous in confined spaces where there’s no easy way to escape.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
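&lt;p&gt;The thresholds above translate directly into a monitoring rule. Below is a minimal sketch (the cutoffs come from the study’s reported regimes; the function name and labels are illustrative):&lt;/p&gt;

```javascript
// Hypothetical helper mapping measured crowd density (people per square
// meter) to the risk regimes reported in the San Fermín study.
function classifyDensity(peoplePerSqMeter) {
  if (peoplePerSqMeter >= 9) {
    return "critical"; // spontaneous ~18-second oscillations can emerge
  }
  if (peoplePerSqMeter >= 6) {
    return "constrained"; // movement predictable but heavily restricted
  }
  if (peoplePerSqMeter >= 2) {
    return "orderly";
  }
  return "free"; // below the densities examined in the study
}

console.log(classifyDensity(4)); // "orderly"
console.log(classifyDensity(7)); // "constrained"
console.log(classifyDensity(9)); // "critical"
```

&lt;p&gt;An operator dashboard could raise an alarm as soon as any monitored cell reaches the “critical” regime, well before a dangerous wave develops.&lt;/p&gt;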




&lt;h2&gt;
  
  
  &lt;strong&gt;What This Means for Large Gatherings in India&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Mahakumbh Mela, which attracts &lt;strong&gt;millions of devotees&lt;/strong&gt;, is one of the largest gatherings in the world. Managing such a massive crowd is a monumental task, and the recent stampede underscores the urgent need for better crowd management strategies.  &lt;/p&gt;

&lt;p&gt;The insights from the San Fermín study are particularly relevant here. By understanding how crowds behave at different densities, we can anticipate risks and take proactive measures to prevent disasters.  &lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How Technology Can Help: A Data-Driven Approach&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here’s where technology and data science come into play. By leveraging real-time monitoring, predictive modeling, and smart design, we can make large gatherings safer. Here are some actionable strategies:  &lt;/p&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Real-Time Crowd Monitoring&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-Resolution Cameras and Sensors:&lt;/strong&gt; Deploying cameras and IoT sensors can help track crowd density and movement in real time.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Powered Analytics:&lt;/strong&gt; Machine learning algorithms can analyze video feeds to detect anomalies, such as sudden increases in density or unusual movement patterns.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Predictive Modeling&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mathematical Models:&lt;/strong&gt; Using equations derived from fluid dynamics, we can predict how crowds will behave as density increases.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simulations:&lt;/strong&gt; Running simulations based on historical data can help organizers identify potential trouble spots and plan interventions.
&lt;/li&gt;
&lt;/ul&gt;
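&lt;p&gt;As a toy illustration of the simulation idea (the model and all numbers here are hypothetical, not from the study): track density in a fixed area as people enter and leave, and report the first minute at which it crosses the critical 9 people per square meter threshold.&lt;/p&gt;

```javascript
// Toy crowd-density simulation for a fixed-area plaza. Returns the first
// minute at which density reaches 9 people per square meter, or null.
function firstCriticalMinute(inflowPerMin, outflowPerMin, areaSqM, minutes) {
  let people = 0;
  for (let t = 1; minutes >= t; t += 1) {
    people += inflowPerMin - outflowPerMin; // net change this minute
    const density = people / areaSqM;
    if (density >= 9) {
      return t;
    }
  }
  return null; // threshold never reached in the simulated window
}

// 1000 square meters, 700 people entering and 400 leaving per minute:
// net +300/min, so 9000 people (9 per square meter) at minute 30.
console.log(firstCriticalMinute(700, 400, 1000, 60)); // 30
```

&lt;p&gt;Even a crude model like this lets organizers ask “at current flow rates, when do we hit the danger zone?” and intervene before it happens.&lt;/p&gt;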

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Controlled Access and Flow Management&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Timed Entry:&lt;/strong&gt; Limiting the number of people entering a space at any given time can prevent overcrowding.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Barriers:&lt;/strong&gt; Deploying retractable barriers or digital signage to redirect crowds in real time can help maintain safe densities.
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Public Awareness and Communication&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobile Alerts:&lt;/strong&gt; Sending real-time alerts to attendees’ phones can inform them about crowded areas and suggest alternative routes.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear Signage:&lt;/strong&gt; Well-designed signs and announcements can guide people and reduce confusion.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Bigger Picture: Designing Safer Spaces&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The study also emphasizes the importance of &lt;strong&gt;space design&lt;/strong&gt; in crowd management. Here are some key takeaways:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Wider Pathways:&lt;/strong&gt; Ensuring that walkways are wide enough to accommodate large crowds can reduce the risk of bottlenecks.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple Exit Points:&lt;/strong&gt; Having multiple, clearly marked exits can help disperse crowds quickly in case of an emergency.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open Spaces:&lt;/strong&gt; Designing venues with open areas where crowds can spread out can prevent dangerous densities from forming.
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A Call to Action: Collaboration and Innovation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The tragedy at the Mahakumbh Mela is a wake-up call. But it’s also an opportunity—to learn, innovate, and collaborate. By combining insights from research, the power of technology, and thoughtful design, we can create safer, more inclusive experiences for everyone.  &lt;/p&gt;

&lt;p&gt;If you’re working on crowd management solutions, data analytics tools, or IoT devices, I’d love to hear from you.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Large gatherings are a celebration of our shared humanity. But they also come with risks that we can’t afford to ignore. By understanding crowd behavior, leveraging technology, and designing smarter spaces, we can ensure that these events remain joyful and safe for all.  &lt;/p&gt;

&lt;p&gt;What are your thoughts on using technology for crowd management? Have you worked on similar projects? Let’s discuss in the comments!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ChamaleonLLM: Dynamic Adaptation for Large Language Models During Inference</title>
      <dc:creator>QURBAN AHMAD</dc:creator>
      <pubDate>Sun, 09 Feb 2025 08:48:08 +0000</pubDate>
      <link>https://dev.to/qur786/chamaleonllm-dynamic-adaptation-for-large-language-models-during-inference-chi</link>
      <guid>https://dev.to/qur786/chamaleonllm-dynamic-adaptation-for-large-language-models-during-inference-chi</guid>
      <description>&lt;p&gt;Hey everyone! 👋 I recently came across an exciting research paper titled &lt;strong&gt;&lt;a href="https://arxiv.org/html/2502.04315v1" rel="noopener noreferrer"&gt;ChamaleonLLM: Batch-Aware Dynamic Low-Rank Adaptation via Inference-Time Clusters&lt;/a&gt;&lt;/strong&gt; by &lt;strong&gt;Kamer Ali Yuksel&lt;/strong&gt; and &lt;strong&gt;Hassan Sawaf&lt;/strong&gt; from &lt;strong&gt;aiXplain Inc.&lt;/strong&gt;, and I wanted to share my learnings with you all. The paper introduces a novel framework that enables &lt;strong&gt;dynamic adaptation&lt;/strong&gt; of large language models (LLMs) during inference, which is a game-changer for improving their flexibility and efficiency. Let’s dive into the details!&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Problem with Static LLMs&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Large language models like GPT-3 and GPT-4 have revolutionized natural language processing (NLP) with their ability to generate human-like text, summarize documents, translate languages, and more. However, these models are typically deployed with &lt;strong&gt;fixed weights&lt;/strong&gt;, meaning they cannot adapt to new or varying data during &lt;strong&gt;inference&lt;/strong&gt; (the phase where the model generates predictions). This static nature can lead to suboptimal performance when the input data differs from what the model was trained on.&lt;/p&gt;

&lt;p&gt;For example, if a model is trained on formal text but encounters informal or noisy data during inference, it may struggle to generate accurate or coherent responses. Traditional fine-tuning methods like &lt;strong&gt;Low-Rank Adaptation (LoRA)&lt;/strong&gt; help by introducing small, efficient updates to the model's weights, but these updates are still &lt;strong&gt;static&lt;/strong&gt; during inference. This is where &lt;strong&gt;ChamaleonLLM&lt;/strong&gt; comes in!&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What is ChamaleonLLM?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ChamaleonLLM is a framework that enables &lt;strong&gt;dynamic adaptation&lt;/strong&gt; of LLMs during inference. Instead of using fixed weights or pre-learned updates, ChamaleonLLM adapts the model's behavior on-the-fly based on the &lt;strong&gt;statistics of the input batch&lt;/strong&gt;. Here’s how it works:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Innovations&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Batch-Aware Clustering&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Inputs in a batch are grouped into &lt;strong&gt;clusters&lt;/strong&gt; based on their &lt;strong&gt;token embeddings&lt;/strong&gt; (numerical representations of words or sentences).&lt;/li&gt;
&lt;li&gt;This clustering ensures that similar inputs are processed together, allowing the model to capture shared context and reduce noise.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Dynamic Low-Rank Updates&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;hyper-network&lt;/strong&gt; (a smaller neural network) generates &lt;strong&gt;low-rank updates&lt;/strong&gt; (small adjustments to the model's weights) tailored to the statistics of each cluster.&lt;/li&gt;
&lt;li&gt;These updates are computed in real-time, enabling the model to adapt dynamically to the specific characteristics of the input batch.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Efficiency&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unlike traditional methods that require storing multiple expert models or masks, ChamaleonLLM generates updates on-the-fly, reducing memory and computational overhead.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;How Does ChamaleonLLM Work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The framework is built on a pre-trained causal language model (e.g., GPT-2) and consists of two main components:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Batch-Aware Clustering&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Inputs are tokenized and converted into &lt;strong&gt;token embeddings&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;These embeddings are normalized and grouped into clusters using &lt;strong&gt;k-means clustering&lt;/strong&gt;, a simple algorithm that minimizes the distance between points and cluster centroids.&lt;/li&gt;
&lt;li&gt;Each mini-batch contains inputs from the same cluster, ensuring that the model processes contextually similar data together.&lt;/li&gt;
&lt;/ul&gt;
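&lt;p&gt;The clustering step can be sketched with a plain k-means loop. This is a hypothetical illustration rather than the authors’ implementation; the kMeans function and the toy two-dimensional “embeddings” are made up for the example:&lt;/p&gt;

```javascript
// Minimal k-means sketch: groups embedding vectors so that similar
// inputs can be batched together. Naive init, fixed iteration count.
function kMeans(points, k, iters = 10) {
  let centroids = points.slice(0, k).map((p) => p.slice());
  let labels = new Array(points.length).fill(0);
  for (let it = 0; it !== iters; it += 1) {
    // Assignment step: attach each point to its nearest centroid.
    labels = points.map((p) => {
      let best = 0;
      let bestDist = Infinity;
      centroids.forEach((c, j) => {
        const d = p.reduce((s, x, i) => s + (x - c[i]) ** 2, 0);
        if (bestDist > d) {
          bestDist = d;
          best = j;
        }
      });
      return best;
    });
    // Update step: move each centroid to the mean of its members.
    centroids = centroids.map((c, j) => {
      const members = points.filter((_, i) => labels[i] === j);
      if (members.length === 0) {
        return c;
      }
      return c.map((_, dim) =>
        members.reduce((s, m) => s + m[dim], 0) / members.length
      );
    });
  }
  return labels;
}

// Two well-separated toy "embedding" clusters:
const embs = [[0, 0], [0.1, 0], [5, 5], [5.1, 5]];
console.log(kMeans(embs, 2)); // points 0 and 1 share a label, as do 2 and 3
```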

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Adaptive Low-Rank Update Generation&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;hyper-network&lt;/strong&gt; takes the mean token embeddings of each cluster as input and generates low-rank update parameters.&lt;/li&gt;
&lt;li&gt;These updates are applied to the model's weights, allowing it to adapt to the specific characteristics of the cluster.&lt;/li&gt;
&lt;li&gt;The hyper-network is trained to produce updates that improve the model's performance on the given batch.&lt;/li&gt;
&lt;/ul&gt;
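&lt;p&gt;To see why low-rank updates are cheap: a dense update to a d×d weight matrix needs d² new parameters, while a rank-r update W′ = W + A·B (A being d×r, B being r×d) needs only 2·d·r. The helpers below are a hypothetical sketch of the idea, not the paper’s code:&lt;/p&gt;

```javascript
// Applies a rank-r update W' = W + A.B to a weight matrix W.
// Matrices are plain arrays of rows; all names here are illustrative.
function matMul(A, B) {
  return A.map((row) =>
    B[0].map((_, j) => row.reduce((s, a, k) => s + a * B[k][j], 0))
  );
}

function applyLowRankUpdate(W, A, B) {
  const delta = matMul(A, B); // the low-rank correction generated per cluster
  return W.map((row, i) => row.map((w, j) => w + delta[i][j]));
}

// Rank-1 update of a 2x2 identity: delta = A.B = [[3, 4], [6, 8]].
const W = [[1, 0], [0, 1]];
const A = [[1], [2]];
const B = [[3, 4]];
console.log(applyLowRankUpdate(W, A, B)); // [[4, 4], [6, 9]]
```

&lt;p&gt;In ChamaleonLLM the hyper-network produces the update factors on the fly from each cluster’s mean embedding, so different mini-batches get different corrections without storing separate expert models.&lt;/p&gt;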




&lt;h2&gt;
  
  
  &lt;strong&gt;Why is ChamaleonLLM Better?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The authors compare ChamaleonLLM with traditional LoRA and unadapted GPT-2 models on the &lt;strong&gt;WikiText-2 dataset&lt;/strong&gt;, a benchmark for language modeling. Here are the key results:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Adaptation Regime&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Parameters&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Validation Loss&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Validation Perplexity&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Unadapted GPT-2&lt;/td&gt;
&lt;td&gt;124,439,808&lt;/td&gt;
&lt;td&gt;10.2513&lt;/td&gt;
&lt;td&gt;28,319&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traditional LoRA&lt;/td&gt;
&lt;td&gt;204,100&lt;/td&gt;
&lt;td&gt;1.3528&lt;/td&gt;
&lt;td&gt;3.8683&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ChamaleonLLM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;6,786,596&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.3753&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.4554&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ChamaleonLLM&lt;/strong&gt; achieves significantly lower validation loss and perplexity compared to traditional LoRA and unadapted GPT-2.&lt;/li&gt;
&lt;li&gt;The dynamic adaptation mechanism allows the model to generalize better and handle diverse input distributions.&lt;/li&gt;
&lt;/ul&gt;
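&lt;p&gt;A quick sanity check on the table: perplexity is just the exponential of the (average cross-entropy) validation loss, so either column can be reproduced from the other:&lt;/p&gt;

```javascript
// Perplexity is exp(loss); the table's two metric columns are linked.
const perplexity = (loss) => Math.exp(loss);

console.log(perplexity(0.3753).toFixed(4)); // ~1.4554 (ChamaleonLLM row)
console.log(perplexity(1.3528).toFixed(4)); // ~3.868 (LoRA row)
console.log(Math.round(perplexity(10.2513))); // ~28,319 (unadapted GPT-2 row)
```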




&lt;h2&gt;
  
  
  &lt;strong&gt;Key Takeaways&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Adaptation&lt;/strong&gt;: ChamaleonLLM enables LLMs to adapt dynamically during inference, improving their performance on diverse and novel data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batch-Aware Clustering&lt;/strong&gt;: By grouping similar inputs, the model can capture shared context and reduce noise.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficiency&lt;/strong&gt;: The hyper-network generates low-rank updates on-the-fly, eliminating the need for storing multiple expert models or masks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versatility&lt;/strong&gt;: ChamaleonLLM can adapt to a wide range of tasks and data distributions without requiring predefined task embeddings.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ChamaleonLLM represents a significant step toward making LLMs more flexible and efficient in real-world applications. By enabling dynamic adaptation during inference, this framework can improve the performance of LLMs in scenarios where input data is highly variable or noisy. It also reduces the computational and memory overhead associated with traditional fine-tuning methods.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Open Source and Reproducibility&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The authors have open-sourced the code for ChamaleonLLM, ensuring that the research community can reproduce and build upon their work. You can find the code and additional details in the &lt;a href="https://arxiv.org/html/2502.04315v1" rel="noopener noreferrer"&gt;paper&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Final Thoughts&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ChamaleonLLM is a promising framework that addresses a critical limitation of current LLMs: their inability to adapt dynamically during inference. By leveraging batch-aware clustering and dynamic low-rank updates, this approach opens up new possibilities for improving the flexibility and efficiency of language models. I’m excited to see how this research evolves and how it will be applied in real-world NLP applications!&lt;/p&gt;

&lt;p&gt;If you’re interested in learning more, I highly recommend reading the &lt;a href="https://arxiv.org/html/2502.04315v1" rel="noopener noreferrer"&gt;full paper&lt;/a&gt;. Let me know your thoughts in the comments below! 🚀&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;References&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Paper&lt;/strong&gt;: &lt;a href="https://arxiv.org/html/2502.04315v1" rel="noopener noreferrer"&gt;ChamaleonLLM: Batch-Aware Dynamic Low-Rank Adaptation via Inference-Time Clusters&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authors&lt;/strong&gt;: Kamer Ali Yuksel &amp;amp; Hassan Sawaf (aiXplain Inc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code&lt;/strong&gt;: Open-sourced for reproducibility.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>🔥 The 10x Engineer is a Myth (And That’s a Good Thing!)</title>
      <dc:creator>QURBAN AHMAD</dc:creator>
      <pubDate>Wed, 29 Jan 2025 14:04:40 +0000</pubDate>
      <link>https://dev.to/qur786/--1nge</link>
      <guid>https://dev.to/qur786/--1nge</guid>
      <description>&lt;p&gt;𝑊𝑒’𝑣𝑒 𝑎𝑙𝑙 ℎ𝑒𝑎𝑟𝑑 𝑜𝑓 𝑡ℎ𝑒 𝗹𝗲𝗴𝗲𝗻𝗱𝗮𝗿𝘆 “𝟭𝟬𝘅 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿”—𝑡ℎ𝑎𝑡 𝑜𝑛𝑒 𝑑𝑒𝑣𝑒𝑙𝑜𝑝𝑒𝑟 𝑤ℎ𝑜 𝑠𝑢𝑝𝑝𝑜𝑠𝑒𝑑𝑙𝑦 𝑤𝑟𝑖𝑡𝑒𝑠 𝑚𝑜𝑟𝑒 𝑐𝑜𝑑𝑒 𝑡ℎ𝑎𝑛 10 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟𝑠 𝑐𝑜𝑚𝑏𝑖𝑛𝑒𝑑, 𝑛𝑒𝑣𝑒𝑟 𝑛𝑒𝑒𝑑𝑠 ℎ𝑒𝑙𝑝, 𝑎𝑛𝑑 𝑠𝑖𝑛𝑔𝑙𝑒-ℎ𝑎𝑛𝑑𝑒𝑑𝑙𝑦 𝑏𝑢𝑖𝑙𝑑𝑠 𝑏𝑖𝑙𝑙𝑖𝑜𝑛-𝑑𝑜𝑙𝑙𝑎𝑟 𝑠𝑦𝑠𝑡𝑒𝑚𝑠 𝑤ℎ𝑖𝑙𝑒 𝑑𝑟𝑖𝑛𝑘𝑖𝑛𝑔 𝑛𝑜𝑡ℎ𝑖𝑛𝑔 𝑏𝑢𝑡 𝑐𝑜𝑓𝑓𝑒𝑒 𝑎𝑛𝑑 𝑤𝑟𝑖𝑡𝑖𝑛𝑔 𝑖𝑛 𝑉𝑖𝑚.  &lt;/p&gt;

&lt;p&gt;𝑆𝑜𝑢𝑛𝑑𝑠 𝑖𝑚𝑝𝑟𝑒𝑠𝑠𝑖𝑣𝑒, 𝑟𝑖𝑔ℎ𝑡? 𝐵𝑢𝑡 ℎ𝑒𝑟𝑒’𝑠 𝑡ℎ𝑒 ℎ𝑎𝑟𝑑 𝑡𝑟𝑢𝑡ℎ: 𝗧𝗵𝗲 𝟭𝟬𝘅 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿 𝗶𝘀 𝗮 𝗺𝘆𝘁𝗵. 𝐴𝑛𝑑 𝑤𝑜𝑟𝑠𝑒—𝑖𝑡’𝑠 𝑎 𝑑𝑎𝑛𝑔𝑒𝑟𝑜𝑢𝑠 𝑜𝑛𝑒.  &lt;/p&gt;

&lt;p&gt;𝗪𝗵𝘆 𝗖𝗵𝗮𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 “𝟭𝟬𝘅 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿” 𝗶𝘀 𝗮 𝗠𝗶𝘀𝘁𝗮𝗸𝗲&lt;/p&gt;

&lt;p&gt;🚨 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗶𝘀 𝗮 𝘁𝗲𝗮𝗺 𝘀𝗽𝗼𝗿𝘁 – 𝑁𝑜 𝑚𝑎𝑡𝑡𝑒𝑟 ℎ𝑜𝑤 𝑠𝑘𝑖𝑙𝑙𝑒𝑑 𝑎𝑛 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟 𝑖𝑠, 𝑠𝑜𝑓𝑡𝑤𝑎𝑟𝑒 𝑑𝑒𝑣𝑒𝑙𝑜𝑝𝑚𝑒𝑛𝑡 𝑖𝑠 𝑟𝑎𝑟𝑒𝑙𝑦 𝑎 𝑠𝑜𝑙𝑜 𝑚𝑖𝑠𝑠𝑖𝑜𝑛. 𝐴 𝑔𝑟𝑒𝑎𝑡 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟 𝑖𝑠𝑛’𝑡 𝑡ℎ𝑒 𝑜𝑛𝑒 𝑤ℎ𝑜 𝑤𝑟𝑖𝑡𝑒𝑠 𝑡ℎ𝑒 𝑚𝑜𝑠𝑡 𝑐𝑜𝑑𝑒; 𝑖𝑡’𝑠 𝑡ℎ𝑒 𝑜𝑛𝑒 𝑤ℎ𝑜 𝑚𝑎𝑘𝑒𝑠 𝑡ℎ𝑒 𝘄𝗵𝗼𝗹𝗲 𝘁𝗲𝗮𝗺 𝗺𝗼𝗿𝗲 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲.  &lt;/p&gt;

&lt;p&gt;🚨 𝗦𝗰𝗮𝗹𝗶𝗻𝗴 𝗶𝘀𝗻’𝘁 𝗮𝗯𝗼𝘂𝘁 𝘀𝗽𝗲𝗲𝗱 – 𝐴 𝑠𝑜-𝑐𝑎𝑙𝑙𝑒𝑑 10𝑥 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟 𝑤ℎ𝑜 𝑟𝑒𝑓𝑢𝑠𝑒𝑠 𝑡𝑜 𝑑𝑜𝑐𝑢𝑚𝑒𝑛𝑡, 𝑐𝑜𝑙𝑙𝑎𝑏𝑜𝑟𝑎𝑡𝑒, 𝑜𝑟 𝑚𝑒𝑛𝑡𝑜𝑟 𝑜𝑡ℎ𝑒𝑟𝑠 𝑖𝑠𝑛’𝑡 𝑎𝑛 𝑎𝑠𝑠𝑒𝑡—𝑡ℎ𝑒𝑦’𝑟𝑒 𝑎 𝑏𝑜𝑡𝑡𝑙𝑒𝑛𝑒𝑐𝑘. 𝑊ℎ𝑒𝑛 𝑡ℎ𝑒𝑦 𝑙𝑒𝑎𝑣𝑒, 𝑡ℎ𝑒𝑦 𝑡𝑎𝑘𝑒 𝑡ℎ𝑒 𝑘𝑛𝑜𝑤𝑙𝑒𝑑𝑔𝑒 𝑤𝑖𝑡ℎ 𝑡ℎ𝑒𝑚, 𝑙𝑒𝑎𝑣𝑖𝑛𝑔 𝑏𝑒ℎ𝑖𝑛𝑑 𝑎 𝑚𝑒𝑠𝑠 𝑡ℎ𝑎𝑡 𝑜𝑡ℎ𝑒𝑟𝑠 𝑠𝑡𝑟𝑢𝑔𝑔𝑙𝑒 𝑡𝑜 𝑚𝑎𝑖𝑛𝑡𝑎𝑖𝑛.  &lt;/p&gt;

&lt;p&gt;🚨 𝗖𝗼𝗱𝗲 𝗶𝘀 𝗰𝗵𝗲𝗮𝗽, 𝗶𝗺𝗽𝗮𝗰𝘁 𝗶𝘀 𝗲𝘃𝗲𝗿𝘆𝘁𝗵𝗶𝗻𝗴 – 𝑊𝑟𝑖𝑡𝑖𝑛𝑔 𝑚𝑜𝑟𝑒 𝑐𝑜𝑑𝑒 𝑑𝑜𝑒𝑠𝑛’𝑡 𝑎𝑙𝑤𝑎𝑦𝑠 𝑚𝑒𝑎𝑛 𝑑𝑒𝑙𝑖𝑣𝑒𝑟𝑖𝑛𝑔 𝑚𝑜𝑟𝑒 𝑣𝑎𝑙𝑢𝑒. 𝑆𝑜𝑚𝑒𝑡𝑖𝑚𝑒𝑠, 𝑡ℎ𝑒 𝑏𝑒𝑠𝑡 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟𝑠 𝑤𝑟𝑖𝑡𝑒 𝑙𝑒𝑠𝑠—𝑡ℎ𝑒𝑦 𝑚𝑎𝑘𝑒 𝑏𝑒𝑡𝑡𝑒𝑟 𝑎𝑟𝑐ℎ𝑖𝑡𝑒𝑐𝑡𝑢𝑟𝑎𝑙 𝑐ℎ𝑜𝑖𝑐𝑒𝑠, 𝑒𝑙𝑖𝑚𝑖𝑛𝑎𝑡𝑒 𝑢𝑛𝑛𝑒𝑐𝑒𝑠𝑠𝑎𝑟𝑦 𝑐𝑜𝑚𝑝𝑙𝑒𝑥𝑖𝑡𝑦, 𝑎𝑛𝑑 ℎ𝑒𝑙𝑝 𝑡ℎ𝑒 𝑏𝑢𝑠𝑖𝑛𝑒𝑠𝑠 𝑚𝑜𝑣𝑒 𝑓𝑜𝑟𝑤𝑎𝑟𝑑 𝑒𝑓𝑓𝑖𝑐𝑖𝑒𝑛𝑡𝑙𝑦.  &lt;/p&gt;

&lt;p&gt;🚨 𝗧𝗵𝗲 𝗶𝗹𝗹𝘂𝘀𝗶𝗼𝗻 𝗼𝗳 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝘃𝗶𝘁𝘆 – 𝑀𝑎𝑛𝑦 𝑠𝑒𝑙𝑓-𝑝𝑟𝑜𝑐𝑙𝑎𝑖𝑚𝑒𝑑 “10𝑥 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟𝑠” 𝑜𝑝𝑡𝑖𝑚𝑖𝑧𝑒 𝑓𝑜𝑟 𝑠ℎ𝑜𝑟𝑡-𝑡𝑒𝑟𝑚 𝑠𝑝𝑒𝑒𝑑 𝑎𝑡 𝑡ℎ𝑒 𝑐𝑜𝑠𝑡 𝑜𝑓 𝑙𝑜𝑛𝑔-𝑡𝑒𝑟𝑚 𝑚𝑎𝑖𝑛𝑡𝑎𝑖𝑛𝑎𝑏𝑖𝑙𝑖𝑡𝑦. 𝑇ℎ𝑒𝑦 𝑚𝑖𝑔ℎ𝑡 𝑑𝑒𝑙𝑖𝑣𝑒𝑟 𝑓𝑒𝑎𝑡𝑢𝑟𝑒𝑠 𝑞𝑢𝑖𝑐𝑘𝑙𝑦, 𝑏𝑢𝑡 𝑖𝑓 𝑡ℎ𝑜𝑠𝑒 𝑓𝑒𝑎𝑡𝑢𝑟𝑒𝑠 𝑎𝑟𝑒 𝑓𝑢𝑙𝑙 𝑜𝑓 𝑡𝑒𝑐ℎ 𝑑𝑒𝑏𝑡, 𝑡ℎ𝑒𝑦 𝑐𝑟𝑒𝑎𝑡𝑒 𝑚𝑜𝑟𝑒 𝑤𝑜𝑟𝑘 𝑓𝑜𝑟 𝑒𝑣𝑒𝑟𝑦𝑜𝑛𝑒 𝑒𝑙𝑠𝑒.  &lt;/p&gt;

&lt;p&gt;𝑆𝑜 𝑊ℎ𝑜 𝐴𝑟𝑒 𝑡ℎ𝑒 𝑅𝑒𝑎𝑙 10𝑥 𝐸𝑛𝑔𝑖𝑛𝑒𝑒𝑟𝑠?  &lt;/p&gt;

&lt;p&gt;𝑇ℎ𝑒 𝑏𝑒𝑠𝑡 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟𝑠 𝑎𝑟𝑒𝑛’𝑡 𝑗𝑢𝑠𝑡 𝑓𝑎𝑠𝑡 𝑐𝑜𝑑𝑒𝑟𝑠—𝑡ℎ𝑒𝑦’𝑟𝑒 10𝑥 𝑝𝑟𝑜𝑏𝑙𝑒𝑚 𝑠𝑜𝑙𝑣𝑒𝑟𝑠. 𝑇ℎ𝑒𝑦:  &lt;/p&gt;

&lt;p&gt;✅ 𝐸𝑛𝑎𝑏𝑙𝑒 𝑜𝑡ℎ𝑒𝑟𝑠 – 𝑇ℎ𝑒𝑦 𝑚𝑒𝑛𝑡𝑜𝑟, 𝑠ℎ𝑎𝑟𝑒 𝑘𝑛𝑜𝑤𝑙𝑒𝑑𝑔𝑒, 𝑎𝑛𝑑 ℎ𝑒𝑙𝑝 𝑡ℎ𝑒 𝑡𝑒𝑎𝑚 𝑔𝑟𝑜𝑤.  &lt;/p&gt;

&lt;p&gt;✅ 𝑇ℎ𝑖𝑛𝑘 𝑙𝑜𝑛𝑔-𝑡𝑒𝑟𝑚 – 𝑇ℎ𝑒𝑦 𝑓𝑜𝑐𝑢𝑠 𝑜𝑛 𝑤𝑟𝑖𝑡𝑖𝑛𝑔 𝑚𝑎𝑖𝑛𝑡𝑎𝑖𝑛𝑎𝑏𝑙𝑒, 𝑠𝑐𝑎𝑙𝑎𝑏𝑙𝑒 𝑠𝑦𝑠𝑡𝑒𝑚𝑠 𝑖𝑛𝑠𝑡𝑒𝑎𝑑 𝑜𝑓 𝑟𝑢𝑠ℎ𝑖𝑛𝑔 𝑐𝑜𝑑𝑒 𝑜𝑢𝑡 𝑡ℎ𝑒 𝑑𝑜𝑜𝑟.  &lt;/p&gt;

&lt;p&gt;✅ 𝐾𝑛𝑜𝑤 𝑤ℎ𝑒𝑛 𝑡𝑜 𝑠𝑎𝑦 “𝑛𝑜” – 𝑇ℎ𝑒𝑦 𝑑𝑜𝑛’𝑡 𝑏𝑢𝑖𝑙𝑑 𝑓𝑒𝑎𝑡𝑢𝑟𝑒𝑠 𝑓𝑜𝑟 𝑡ℎ𝑒 𝑠𝑎𝑘𝑒 𝑜𝑓 𝑖𝑡; 𝑡ℎ𝑒𝑦 𝑎𝑠𝑘, “𝐼𝑠 𝑡ℎ𝑖𝑠 𝑟𝑒𝑎𝑙𝑙𝑦 𝑛𝑒𝑒𝑑𝑒𝑑?”  &lt;/p&gt;

&lt;p&gt;✅ 𝐶𝑜𝑚𝑚𝑢𝑛𝑖𝑐𝑎𝑡𝑒 𝑒𝑓𝑓𝑒𝑐𝑡𝑖𝑣𝑒𝑙𝑦 – 𝑇ℎ𝑒𝑦 𝑏𝑟𝑒𝑎𝑘 𝑑𝑜𝑤𝑛 𝑐𝑜𝑚𝑝𝑙𝑒𝑥 𝑝𝑟𝑜𝑏𝑙𝑒𝑚𝑠, 𝑚𝑎𝑘𝑖𝑛𝑔 𝑖𝑡 𝑒𝑎𝑠𝑖𝑒𝑟 𝑓𝑜𝑟 𝑜𝑡ℎ𝑒𝑟𝑠 𝑡𝑜 𝑐𝑜𝑛𝑡𝑟𝑖𝑏𝑢𝑡𝑒.  &lt;/p&gt;

&lt;p&gt;𝐹𝑖𝑛𝑎𝑙 𝑇ℎ𝑜𝑢𝑔ℎ𝑡  &lt;/p&gt;

&lt;p&gt;𝑇ℎ𝑒 𝑡𝑒𝑐ℎ 𝑖𝑛𝑑𝑢𝑠𝑡𝑟𝑦 𝑛𝑒𝑒𝑑𝑠 𝑓𝑒𝑤𝑒𝑟 “10𝑥 𝑐𝑜𝑑𝑒𝑟𝑠” 𝑎𝑛𝑑 𝑚𝑜𝑟𝑒 “10𝑥 𝑡𝑒𝑎𝑚𝑚𝑎𝑡𝑒𝑠.” 𝐵𝑒𝑐𝑎𝑢𝑠𝑒 𝑎𝑡 𝑡ℎ𝑒 𝑒𝑛𝑑 𝑜𝑓 𝑡ℎ𝑒 𝑑𝑎𝑦, 𝑔𝑟𝑒𝑎𝑡 𝑠𝑜𝑓𝑡𝑤𝑎𝑟𝑒 𝑖𝑠𝑛’𝑡 𝑏𝑢𝑖𝑙𝑡 𝑏𝑦 ℎ𝑒𝑟𝑜𝑒𝑠—𝑖𝑡’𝑠 𝑏𝑢𝑖𝑙𝑡 𝑏𝑦 𝑡𝑒𝑎𝑚𝑠.  &lt;/p&gt;

&lt;p&gt;𝑊ℎ𝑎𝑡’𝑠 𝑦𝑜𝑢𝑟 𝑒𝑥𝑝𝑒𝑟𝑖𝑒𝑛𝑐𝑒? 𝐻𝑎𝑣𝑒 𝑦𝑜𝑢 𝑤𝑜𝑟𝑘𝑒𝑑 𝑤𝑖𝑡ℎ 𝑠𝑜𝑚𝑒𝑜𝑛𝑒 𝑤ℎ𝑜 𝑠𝑒𝑒𝑚𝑒𝑑 𝑙𝑖𝑘𝑒 𝑎 “10𝑥 𝑒𝑛𝑔𝑖𝑛𝑒𝑒𝑟” 𝑏𝑢𝑡 𝑎𝑐𝑡𝑢𝑎𝑙𝑙𝑦 𝑠𝑙𝑜𝑤𝑒𝑑 𝑡ℎ𝑒 𝑡𝑒𝑎𝑚 𝑑𝑜𝑤𝑛? 𝑂𝑟 𝑚𝑎𝑦𝑏𝑒 𝑠𝑜𝑚𝑒𝑜𝑛𝑒 𝑤ℎ𝑜 𝑚𝑎𝑑𝑒 𝑒𝑣𝑒𝑟𝑦𝑜𝑛𝑒 𝑎𝑟𝑜𝑢𝑛𝑑 𝑡ℎ𝑒𝑚 𝑏𝑒𝑡𝑡𝑒𝑟? 𝐿𝑒𝑡’𝑠 𝑑𝑖𝑠𝑐𝑢𝑠𝑠! 👇  &lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>english</category>
    </item>
    <item>
      <title>Node.js Event Loop: A Comprehensive Guide</title>
      <dc:creator>QURBAN AHMAD</dc:creator>
      <pubDate>Sat, 18 Nov 2023 17:11:17 +0000</pubDate>
      <link>https://dev.to/qur786/nodejs-event-loop-a-comprehensive-guide-4719</link>
      <guid>https://dev.to/qur786/nodejs-event-loop-a-comprehensive-guide-4719</guid>
      <description>&lt;p&gt;Node.js's asynchronous nature and event-driven architecture often lead to questions about its underlying mechanism, particularly the event loop. Understanding how Node.js manages tasks and handles asynchronous operations empowers developers to write efficient and responsive applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the Event Loop
&lt;/h3&gt;

&lt;p&gt;The event loop is what allows Node.js to perform many operations concurrently, even though JavaScript runs on a single thread. It does this by offloading operations to the operating system’s kernel whenever possible; since most modern kernels are multi-threaded, they can handle multiple operations in the background. When one of these operations completes, the kernel notifies Node.js, which queues the associated callback for execution.&lt;/p&gt;

&lt;p&gt;When Node.js starts, it initializes the event loop, processes the provided input script (which may make async API calls, schedule timers, or call process.nextTick()), and then begins processing the event loop.&lt;/p&gt;

&lt;p&gt;The event loop is the heart of Node.js, managing the execution of asynchronous operations in a single-threaded environment. It operates by continuously checking a series of phases to handle different tasks efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Event Loop Phases:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Timers&lt;/strong&gt;: Handles callbacks scheduled via setTimeout() or setInterval().&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pending Callbacks&lt;/strong&gt;: Manages specific system operation callbacks deferred to the next loop iteration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Idle &amp;amp; Prepare&lt;/strong&gt;: Only used internally.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Poll&lt;/strong&gt;: Retrieve new I/O events; execute I/O related callbacks (almost all with the exception of close callbacks, the ones scheduled by timers, and setImmediate()); node will block here when appropriate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check&lt;/strong&gt;: Executes setImmediate() callbacks after the poll phase ends.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Close Callbacks&lt;/strong&gt;: Deals with callbacks related to closed handles or sockets, like the 'close' event.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Between event loop iterations, Node.js checks whether it is still waiting for any asynchronous I/O or timers; if there are none, it shuts down cleanly.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Timers Phase:
&lt;/h4&gt;

&lt;p&gt;A timer specifies a &lt;em&gt;threshold&lt;/em&gt; after which its callback may run, not an exact moment at which it will run: other work in the loop can delay it.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const fs = require('fs');

function asyncFunction(callback) {
  // Let's assume this 'fs.readFile' takes 90ms to read the whole 'abc.txt' file here.
  fs.readFile("abc.txt", callback);
}

const startTime = Date.now();

setTimeout(() =&amp;gt; {
  const setTimeoutExecutionDuration = Date.now() - startTime;
  console.log(`${setTimeoutExecutionDuration}ms has passed since I was scheduled.`);
}, 100);

asyncFunction(() =&amp;gt; {
 // Let's assume it takes 20ms
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the code above shows, the timer passed to &lt;strong&gt;setTimeout&lt;/strong&gt; is 100ms, while &lt;strong&gt;fs.readFile&lt;/strong&gt; (called inside &lt;strong&gt;asyncFunction&lt;/strong&gt;) is assumed to take 90ms to complete and its callback another 20ms to run.&lt;/p&gt;

&lt;p&gt;Even though the &lt;strong&gt;setTimeout&lt;/strong&gt; callback was scheduled to run after 100ms, it actually runs after about 110ms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt; The event loop first enters the &lt;strong&gt;Timers&lt;/strong&gt; phase; since the 100ms threshold has not yet been reached, it moves on to the &lt;strong&gt;Poll&lt;/strong&gt; phase and waits there for I/O. After 90ms, &lt;strong&gt;fs.readFile&lt;/strong&gt; completes, so its callback is added to the poll queue and executed, taking 20ms. The poll queue is then empty and 110ms have elapsed, so the event loop returns to the &lt;strong&gt;Timers&lt;/strong&gt; phase and finally executes the &lt;strong&gt;setTimeout&lt;/strong&gt; callback. In total, the callback ran 110ms after it was scheduled rather than the requested 100ms.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Pending Phase:
&lt;/h4&gt;

&lt;p&gt;Handles specific system operation callbacks, like TCP errors.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Poll Phase:
&lt;/h4&gt;

&lt;p&gt;In the event loop, the poll phase handles I/O operations and queued events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tasks in Poll Phase:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If no timers are set, and the poll queue isn't empty, the event loop processes queued callbacks one by one until it finishes all or reaches a system-defined limit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the poll queue is empty:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If &lt;strong&gt;setImmediate()&lt;/strong&gt; scripts are scheduled, the event loop moves to the check phase to run those scheduled scripts.&lt;/li&gt;
&lt;li&gt;If no &lt;strong&gt;setImmediate()&lt;/strong&gt; scripts are scheduled, the event loop waits for new callbacks to be added to the queue and executes them immediately.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Transition after Poll Phase:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the poll queue empties, the event loop checks for timers that have reached their set thresholds.&lt;/p&gt;

&lt;p&gt;If any timers are ready, the event loop shifts back to the timers phase to execute their associated callbacks.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Check Phase:
&lt;/h4&gt;

&lt;p&gt;The check phase in the event loop runs callbacks right after the poll phase finishes its tasks. If the poll phase is inactive and there are pending scripts scheduled with &lt;strong&gt;setImmediate()&lt;/strong&gt;, the event loop moves directly to the check phase instead of waiting.&lt;/p&gt;

&lt;p&gt;Typically, as code executes, the event loop eventually reaches the poll phase, where it waits for incoming connections or requests. However, if a &lt;strong&gt;setImmediate()&lt;/strong&gt; callback has been queued and the poll phase becomes idle, the event loop moves on to the check phase instead of waiting for new poll events.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Close Phase:
&lt;/h4&gt;

&lt;p&gt;If a socket or handle is closed abruptly &lt;strong&gt;(e.g. socket.destroy())&lt;/strong&gt;, the 'close' event will be emitted in this phase.&lt;/p&gt;




&lt;h4&gt;
  
  
  Conclusion
&lt;/h4&gt;

&lt;p&gt;Understanding the Node.js event loop is pivotal for writing scalable and performant applications. Leveraging its phases effectively can enhance code reliability and efficiency.&lt;/p&gt;

&lt;p&gt;By diving into practical examples and insights, we demystified the event loop, shedding light on its functionality and offering best practices for optimal code execution in Node.js.&lt;/p&gt;

&lt;p&gt;If you want to dive deeper into the intricacies of Node.js event loop, timers, and process.nextTick(), you can explore the official Node.js documentation on &lt;a href="https://nodejs.org/en/docs/guides/event-loop-timers-and-nexttick"&gt;Event Loop, Timers, and process.nextTick()&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Photo by Pixabay: &lt;a href="https://www.pexels.com/photo/ferris-wheel-in-city-315499/"&gt;https://www.pexels.com/photo/ferris-wheel-in-city-315499/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>javascript</category>
      <category>node</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>Revolutionizing React with Server Components: A Game Changer for Performance</title>
      <dc:creator>QURBAN AHMAD</dc:creator>
      <pubDate>Sat, 04 Nov 2023 07:57:09 +0000</pubDate>
      <link>https://dev.to/qur786/revolutionizing-react-with-server-components-a-game-changer-for-performance-3p30</link>
      <guid>https://dev.to/qur786/revolutionizing-react-with-server-components-a-game-changer-for-performance-3p30</guid>
      <description>&lt;p&gt;In the ever-evolving world of web development, React Server Components is a groundbreaking addition that promises to reshape how we build and optimize web applications. React Server Components tackle the limitations of client-side rendering, creating an enhanced user experience, particularly in scenarios where content frequently changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Client-Side Rendering
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Blank Screen Blues&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Traditional React client-side rendering can be a frustrating experience for users. When an app loads, users are often greeted with a blank screen, leaving them staring into the void of nothingness.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Savior: Server-Side Rendering (SSR)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The Power of SSR&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Server-side rendering (SSR) comes to the rescue. With SSR, React generates the initial HTML of your app and sends it to the client on request. This means that even before React starts its magic, users can see content on the screen.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Data Fetching Dilemma
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Data Fetching Dilemmas&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Data fetching in React apps can be a bit of a waiting game:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In client-side rendering (CSR), data fetching typically occurs after the app has loaded. Users are left staring at loading screens until the data arrives.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In server-side rendering (SSR), data can be fetched on the server, improving loading times. However, traditional React lacks official support for data fetching within components during SSR.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Meet the Game-Changer: React Server Components
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Introducing React Server Components&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Enter the hero of our story: React Server Components. These components run directly on the server, keeping your JavaScript bundle lean and mean. But:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Unlike regular React components, React Server Components don't re-render on the client side. This means you can't use stateful features like &lt;code&gt;useState&lt;/code&gt; and &lt;code&gt;useEffect&lt;/code&gt;; it also means they are never hydrated on the client.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;However, to unleash their power, React Server Components need to play nice with external elements like the bundler, the server, and the router. Currently, the only way to use them is with Next.js 14.0+ and its re-architected "App Router."&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Next.js Integration&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;If you're a Next.js enthusiast, you're in for a treat. In Next.js, every component is, by default, a React Server Component. To add interactivity to a server component, you can use the "use client" directive provided by React.&lt;/p&gt;
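&lt;p&gt;As a hypothetical sketch of such a Client Component (the file and component names are made up, and &lt;strong&gt;React.createElement&lt;/strong&gt; is used here in place of JSX):&lt;/p&gt;

```javascript
"use client"; // opts this file into the client bundle, so it gets hydrated

import { createElement, useState } from "react";

// Counter.js: a small interactive component. Because of "use client",
// stateful hooks like useState are available again.
export default function Counter() {
  const [count, setCount] = useState(0);
  return createElement(
    "button",
    { onClick: () => setCount(count + 1) },
    "Clicked " + count + " times"
  );
}
```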

&lt;h2&gt;
  
  
  The Advantages
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why You Should Be Excited&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;React Server Components bring a host of advantages to the table:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance Boost&lt;/strong&gt;: With server components, you reduce the amount of JavaScript that needs to be downloaded and the number of components that need to be hydrated. It's a performance win!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized Bundles&lt;/strong&gt;: Say goodbye to bundling large library files if they're only used within server components. The final HTML generated on the server is sent to the client, resulting in significant reductions in your JS bundle size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Data Fetching&lt;/strong&gt;: Forget the hassle of async data fetching using hooks like &lt;code&gt;useEffect&lt;/code&gt;. You can now integrate data-fetching code directly into your React Server Components.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;abc-db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;link&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;root&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;link&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SELECT * FROM customers&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Customers&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h1&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;article&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h2&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;age&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/p&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/article&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;))}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;React Server Components mark an exciting milestone in React's evolution, enabling us to run server-exclusive code within our components. They're a game changer for performance, bundle size, and the overall developer experience. So, whether you're building the next big thing or optimizing your current app, React Server Components are here to revolutionize your web development journey. Embrace the change, and let your apps shine! 🚀🌟&lt;/p&gt;

&lt;p&gt;Photo by Christopher Farrugia: &lt;a href="https://www.pexels.com/photo/street-cafe-employees-working-at-counter-at-night-3755849/"&gt;https://www.pexels.com/photo/street-cafe-employees-working-at-counter-at-night-3755849/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>react</category>
      <category>ssr</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Understanding Source Maps: Simplifying Debugging</title>
      <dc:creator>QURBAN AHMAD</dc:creator>
      <pubDate>Wed, 12 Jul 2023 18:05:38 +0000</pubDate>
      <link>https://dev.to/qur786/understanding-source-maps-simplifying-debugging-1ikh</link>
      <guid>https://dev.to/qur786/understanding-source-maps-simplifying-debugging-1ikh</guid>
      <description>&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/man-in-gray-long-sleeve-suit-holding-a-pen-8369520/"&gt;cottonbro studio&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Today, we are talking about source maps, a crucial tool in modern web development that makes debugging significantly easier.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is a Source Map?
&lt;/h3&gt;

&lt;p&gt;Definition of the &lt;strong&gt;Source Map&lt;/strong&gt; from &lt;a href="https://firefox-source-docs.mozilla.org/devtools-user/debugger/how_to/use_a_source_map/index.html"&gt;Firefox official docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A source map is a file that maps from the transformed source to the original source, enabling the browser to reconstruct the original source and present the reconstructed original in the debugger.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Debugging with Source Maps
&lt;/h3&gt;

&lt;p&gt;Let's dive into two common debugging scenarios and see how source maps can simplify the process.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Debugging Transpiled code&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the past, websites and web applications used to be created with simple HTML, CSS, and JavaScript. However, in the present day, we usually build these web applications with various development tools such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;JavaScript frameworks: Angular, React, Vue, Svelte, etc.&lt;/li&gt;
&lt;li&gt;Languages that compile to JavaScript: TypeScript, Flow, etc.&lt;/li&gt;
&lt;li&gt;CSS preprocessors: SCSS, LESS, PostCSS, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools then transpile our code into standard HTML, JavaScript, and CSS that browsers can understand.&lt;/p&gt;

&lt;p&gt;Without source maps, debugging the transpiled code can be cumbersome. However, with source maps, the debugging experience becomes seamless.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Debugging minified code:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minifying is a common practice for optimizing production code (for example, using &lt;a href="https://github.com/terser/terser"&gt;Terser&lt;/a&gt; to minify and mangle JavaScript).&lt;/p&gt;

&lt;p&gt;However, minification often obfuscates the code, making it challenging to debug. Source maps come to the rescue again by mapping the minified code back to its original, unminified form.&lt;/p&gt;
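&lt;p&gt;As an illustration, Terser can emit a source map alongside the minified bundle (the file names here are hypothetical):&lt;/p&gt;

```shell
# Minify and mangle app.js, producing app.min.js plus app.min.js.map.
# The url option writes the sourceMappingURL comment into app.min.js.
npx terser app.js --compress --mangle \
  --output app.min.js \
  --source-map "url='app.min.js.map'"
```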

&lt;h3&gt;
  
  
  Generating source maps
&lt;/h3&gt;

&lt;p&gt;Source maps are files with names ending in .map (for example, &lt;strong&gt;example.min.js.map&lt;/strong&gt; and &lt;strong&gt;styles.css.map&lt;/strong&gt;). They can be generated by most build tools, for example, &lt;strong&gt;TypeScript&lt;/strong&gt;, &lt;strong&gt;Webpack&lt;/strong&gt;, &lt;strong&gt;Rollup&lt;/strong&gt;, etc.&lt;/p&gt;

&lt;p&gt;Some tools include source maps by default, while others may need additional configuration to produce them.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Demo&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here, we will create a web page with TypeScript code transpiled into JavaScript that logs &lt;strong&gt;"Hello World"&lt;/strong&gt; in the browser console when a button is clicked. To generate an error, the button has deliberately been left out of the HTML file.&lt;/p&gt;

&lt;p&gt;This is the TypeScript configuration file (&lt;strong&gt;tsconfig.json&lt;/strong&gt;) of the project, in which the source map generation option (&lt;strong&gt;sourceMap&lt;/strong&gt;) has been enabled.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
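&lt;p&gt;A minimal &lt;strong&gt;tsconfig.json&lt;/strong&gt; along these lines enables it (a sketch; the exact compiler options in the repository may differ, but &lt;strong&gt;"sourceMap": true&lt;/strong&gt; is the relevant flag):&lt;/p&gt;

```json
{
  "compilerOptions": {
    "target": "ES2017",
    "strict": true,
    "outDir": "./dist",
    "sourceMap": true
  }
}
```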


&lt;p&gt;The complete code of this project can be found here:&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/qur786"&gt;
        qur786
      &lt;/a&gt; / &lt;a href="https://github.com/qur786/Introduction-to-source-map"&gt;
        Introduction-to-source-map
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h3&gt;
Introduction to Source Map&lt;/h3&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/qur786/Introduction-to-source-map"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;p&gt;When we transpile the TypeScript file &lt;strong&gt;index.ts&lt;/strong&gt; into JavaScript, besides the &lt;strong&gt;index.js&lt;/strong&gt; file the compiler also creates one more file called &lt;strong&gt;index.js.map&lt;/strong&gt;. This is the source map that a browser can use to recreate the &lt;strong&gt;index.ts&lt;/strong&gt; file from &lt;strong&gt;index.js&lt;/strong&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
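&lt;p&gt;The link between the two files is a one-line directive that &lt;strong&gt;tsc&lt;/strong&gt; appends to the generated JavaScript:&lt;/p&gt;

```javascript
// Last line of the generated index.js: the browser's devtools follow
// this directive and fetch index.js.map to reconstruct index.ts.
//# sourceMappingURL=index.js.map
```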


&lt;p&gt;When we render the webpage in the browser using a server (the &lt;a href="https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer"&gt;Live Server&lt;/a&gt; VS Code extension has been used here to serve the &lt;strong&gt;index.html&lt;/strong&gt; file on localhost), we see an error in the browser console along with its stack trace. As we can see, the trace points at &lt;strong&gt;index.ts&lt;/strong&gt;, even though the browser only ever executes the transpiled &lt;strong&gt;index.js&lt;/strong&gt;; the source map makes this mapping back to the original possible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mcB_oCtE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7sly8k2e6sbcip0g96iz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mcB_oCtE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7sly8k2e6sbcip0g96iz.png" alt="Console tab of a browser showing stack trace of an error" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we check the browser &lt;strong&gt;Sources&lt;/strong&gt; tab, we find we have both our Typescript and Javascript files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ofXl8RHV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89vql3blsm168uj2j1pp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ofXl8RHV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89vql3blsm168uj2j1pp.png" alt="Source tab of a browser showing TS code" width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thus, we can also debug the code using our original TypeScript source via the browser debugger. This may not seem like a big win here, since the transpiled file is very short, but for a larger and more complex project it can significantly ease a developer's debugging work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Generating a source map for your transpiled/minified code allows you to simplify debugging and trace errors back to the original source. Incorporate source maps into your development workflow to enhance your debugging experience and streamline the error-tracing process.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>sourcemap</category>
      <category>debugging</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
