<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: SagarTrimukhe</title>
    <description>The latest articles on DEV Community by SagarTrimukhe (@sagartrimukhe).</description>
    <link>https://dev.to/sagartrimukhe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F630997%2F6e118a00-9072-4ef0-b68b-6a86d57e2a3d.jpeg</url>
      <title>DEV Community: SagarTrimukhe</title>
      <link>https://dev.to/sagartrimukhe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sagartrimukhe"/>
    <language>en</language>
    <item>
      <title>Zero downtime, lossless migration of records to ElasticSearch indexes</title>
      <dc:creator>SagarTrimukhe</dc:creator>
      <pubDate>Mon, 26 Jan 2026 07:59:41 +0000</pubDate>
      <link>https://dev.to/sagartrimukhe/zero-downtime-lossless-migration-of-records-to-elasticsearch-indexes-1end</link>
      <guid>https://dev.to/sagartrimukhe/zero-downtime-lossless-migration-of-records-to-elasticsearch-indexes-1end</guid>
      <description>&lt;p&gt;Migrating nearly 2TB of audit log data to new Elasticsearch (ES) indexes on an AWS OpenSearch cluster is no small feat—especially when the system is live, handling thousands of new logs every minute, and strict consistency is non-negotiable. Here’s how we tackled this challenge, ensuring zero downtime, no data loss, and minimal impact on other services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Migrate?
&lt;/h3&gt;

&lt;p&gt;Our existing ES schema could no longer keep up with our scaling requirements. Query performance was lagging, and new use cases demanded a more flexible schema. We needed to redesign our ES indexes to support faster retrieval and richer queries, all while keeping response times low.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnegjdcple64jpkp21rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnegjdcple64jpkp21rd.png" alt="Data copying from source to target index"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Live System:&lt;/strong&gt; The audit log service is mission-critical, with other services depending on it for workflow completion. Downtime was not an option.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Throughput:&lt;/strong&gt; Thousands of logs are ingested every minute. Missing even a single log could mislead customers, as logs track everything from device additions to user logins.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shared ES Cluster:&lt;/strong&gt; Multiple services use the same AWS OpenSearch (Elasticsearch-compatible) cluster, so heavy migration scripts could not be allowed to degrade API latencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large Data Volume:&lt;/strong&gt; Each month had its own ES index, with four indexes active at any time. We needed to migrate all data from each to new indexes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Data Loss, No Duplicates:&lt;/strong&gt; Consistency was paramount. We could not afford to lose or duplicate any audit logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Planning the Migration
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Understanding the Scale
&lt;/h4&gt;

&lt;p&gt;We started by gathering production-scale data: index sizes, average and maximum latencies, average record size, and the CPU, memory, and latency tolerance thresholds. Since we’d be copying (not moving) data, we also ensured we had enough resources to hold the duplicated data during the migration.&lt;/p&gt;

&lt;p&gt;To ensure a smooth OpenSearch data migration without overwhelming the cluster, it is important to operate within safe resource limits. Keep CPU utilization of each data node below 70% and maintain JVM memory pressure below 80% throughout the migration process.&lt;/p&gt;

&lt;p&gt;When planning the migration indexing rate, also account for existing indexing load, search traffic, and search latency. New indexing throughput should be introduced gradually, based on calculated cluster headroom, assuming a near-linear impact since indexing is both CPU and I/O bound.&lt;/p&gt;

&lt;p&gt;The safe migration indexing rate can be estimated using the following model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPU safe threshold (CPU_safe):&lt;/strong&gt; 70% (matches the target data-node ceiling)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Current CPU usage (CPU_now):&lt;/strong&gt; measured as x&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JVM safe threshold (JVM_safe):&lt;/strong&gt; 80% (keeps pressure under the GC risk zone)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Current JVM pressure (JVM_now):&lt;/strong&gt; observed JVM memory pressure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JVM growth factor (JVM_growth per 1K):&lt;/strong&gt; ≈1.2% additional JVM pressure per extra 1,000 docs/sec indexed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the storage side, although gp3 EBS volumes support 250 MB/sec of throughput per node, indexing consumes disk bandwidth; a conservative estimate is 40 MB/sec of disk throughput per 1,000 docs/sec of indexing load. For additional safety, reserve a 30% buffer (multiply by 0.7) to absorb unpredictable load increases and avoid resource saturation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Safe Migration Indexing Formula&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;R_safe = R_current × min(CPU_safe / CPU_now, (JVM_safe − JVM_now) / JVM_growth_per_1K) × 0.7
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Practical Example&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the current cluster handles 20,000 records per minute (RPM) and both CPU and JVM show 50% available headroom, you can safely plan to migrate an additional 10,000 records per minute without overloading the system (an effective rate of ≈30,000 RPM, or ≈43 million records per day). This keeps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU under 55–60%&lt;/li&gt;
&lt;li&gt;JVM pressure below 80–82%&lt;/li&gt;
&lt;li&gt;Disk throughput well within safe limits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach ensures stable search performance and a reliable migration process without compromising cluster health.&lt;/p&gt;
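&lt;p&gt;As a sanity check, the model above fits in a few lines of Python. This is a back-of-the-envelope sketch, not production code; the function and parameter names are illustrative, with the thresholds quoted above as defaults:&lt;/p&gt;

```python
def safe_migration_rate(r_current, cpu_now, jvm_now,
                        cpu_safe=70.0, jvm_safe=80.0,
                        jvm_growth_per_1k=1.2, buffer=0.7):
    """Estimate a safe indexing rate for migration traffic.

    r_current        -- current indexing rate (docs/min)
    cpu_now, jvm_now -- current data-node CPU %, JVM memory pressure %
    """
    cpu_headroom = cpu_safe / cpu_now                        # how far CPU can stretch
    jvm_headroom = (jvm_safe - jvm_now) / jvm_growth_per_1k  # extra 1K-doc/sec steps JVM allows
    return r_current * min(cpu_headroom, jvm_headroom) * buffer

# e.g. 20,000 RPM today at 35% CPU and 40% JVM pressure
print(safe_migration_rate(20_000, cpu_now=35, jvm_now=40))
```

&lt;p&gt;Treat the result as an estimate to ramp toward gradually while watching live metrics, not a guaranteed ceiling.&lt;/p&gt;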

&lt;h4&gt;
  
  
  2. Efficient Data Reading
&lt;/h4&gt;

&lt;p&gt;Reading data from ES indexes had to be done serially. Parallel reads would add too much query overhead. We considered migrating customer-by-customer, but with 400k+ accounts and the need to fetch account IDs from another microservice, this approach was too complex and would require excessive checkpoint management.&lt;/p&gt;

&lt;p&gt;Instead, we used the ES scroll API (search_after with a point-in-time works similarly), which allowed us to read data in serial batches using a cursor value. This cursor was stored in Redis as a checkpoint for each month’s index.&lt;/p&gt;
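&lt;p&gt;The read loop can be sketched as follows. This is a simplified illustration, not our production code: the scroll calls and the Redis checkpoint lookup are passed in as callables so the resume logic stands out.&lt;/p&gt;

```python
def scroll_batches(open_scroll, next_page, load_checkpoint):
    """Read an index serially, batch by batch, via a scroll cursor.

    open_scroll()     -- start a fresh scroll, returns (scroll_id, hits)
    next_page(sid)    -- fetch the next batch, returns (scroll_id, hits)
    load_checkpoint() -- last saved cursor (e.g. a Redis GET), or None
    """
    cursor = load_checkpoint()
    if cursor is None:
        cursor, hits = open_scroll()      # fresh migration
    else:
        cursor, hits = next_page(cursor)  # resume after the last checkpoint
    while hits:
        yield cursor, hits                # caller converts, writes, checkpoints
        cursor, hits = next_page(cursor)
```

&lt;p&gt;Against a real cluster, open_scroll would POST to the index’s _search endpoint with a scroll parameter and next_page to _search/scroll, with load_checkpoint reading the per-month key from Redis.&lt;/p&gt;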

&lt;h4&gt;
  
  
  3. Designing for Reliability
&lt;/h4&gt;

&lt;p&gt;Given the massive data volume, record count, and resource constraints, the migration would take time. We needed a process that could be paused or stopped at any point and resumed from the last checkpoint after a failure, without starting over. For this, we used Redis to store migration checkpoints, with a 7-day TTL for safety.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checkpointing Strategy: When to Save the Checkpoint in Redis?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of writing the ES cursor to Redis after every read, we wrote it at 5-minute intervals. This kept the write load on Redis low; since our Redis cluster is shared infrastructure, we wanted to avoid spikes or unnecessary load during the migration. A restart could replay up to ~50k docs (5 minutes × 10k/min), which was acceptable because writes were idempotent (same document IDs) and bulk replays were safe for downstream services.&lt;/p&gt;
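&lt;p&gt;The throttled checkpoint write can be sketched like this (illustrative names, not our exact code; the clock is injectable so the interval logic is testable):&lt;/p&gt;

```python
import time

class CheckpointWriter:
    """Persist the ES cursor at most once per interval, e.g. via a Redis SET
    with a 7-day TTL, to keep write load on the shared Redis cluster low."""

    def __init__(self, save, interval_sec=300, clock=time.monotonic):
        self.save = save          # e.g. lambda c: r.set(key, c, ex=7 * 86400)
        self.interval = interval_sec
        self.clock = clock
        self.last_write = None

    def maybe_save(self, cursor):
        now = self.clock()
        if self.last_write is None or now - self.last_write >= self.interval:
            self.save(cursor)
            self.last_write = now
            return True
        return False              # too soon since the last write; skip
```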

&lt;h4&gt;
  
  
  4. Deployment Strategy
&lt;/h4&gt;

&lt;p&gt;The migration script was containerized and deployed as a Kubernetes Job. On startup, it checked Redis for an existing checkpoint; if none was found, it started a fresh migration. Depending on system load, we spawned multiple Job instances, each handling a different month’s index. The script took the month as a parameter, allowing multiple instances to run concurrently without conflict.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Batch Processing and Parallel Writes
&lt;/h4&gt;

&lt;p&gt;Each ES call fetched 10,000 records. We split these into ten batches of 1,000 records each, processing them in parallel. Each batch was converted to the new schema, sometimes requiring calls to other internal services for additional data. Once processed, each batch was written back to ES using bulk writes.&lt;/p&gt;
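&lt;p&gt;The fan-out described above can be sketched like so (a simplified illustration; transform and bulk_write stand in for the schema conversion and the ES bulk call):&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

def split(records, size=1_000):
    """Cut one scroll page (e.g. 10,000 hits) into fixed-size batches."""
    return [records[i:i + size] for i in range(0, len(records), size)]

def process_page(records, transform, bulk_write, workers=10):
    """Convert each batch to the new schema and bulk-write it, batches in parallel."""
    def handle(batch):
        bulk_write([transform(doc) for doc in batch])

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() drains the iterator so any worker exception is re-raised here
        list(pool.map(handle, split(records)))
```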

&lt;p&gt;We could have increased parallelism, but since each record conversion involved external service calls, we prioritized system stability over speed. The migration was intentionally slow to avoid impacting customers or other services. Bulk writers honored backpressure: if CPU/JVM/disk exceeded thresholds or bulk responses returned rejections, the job applied exponential backoff before resuming.&lt;/p&gt;
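&lt;p&gt;That backpressure handling can be sketched as exponential backoff with full jitter around the bulk call; the exception type and retry limits below are illustrative, not the exact values we used:&lt;/p&gt;

```python
import random
import time

class BulkRejected(Exception):
    """Cluster pushed back: bulk rejections, or CPU/JVM/disk over threshold."""

def write_with_backoff(bulk_write, max_retries=5, base_delay=1.0,
                       max_delay=60.0, sleep=time.sleep):
    """Call bulk_write(), backing off exponentially (with jitter) on rejection."""
    for attempt in range(max_retries):
        try:
            return bulk_write()
        except BulkRejected:
            delay = min(max_delay, base_delay * 2 ** attempt)
            sleep(random.uniform(0, delay))  # full jitter spreads out retries
    return bulk_write()  # final attempt: let any failure propagate to the caller
```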

&lt;h4&gt;
  
  
  6. Handling Duplicates and Consistency
&lt;/h4&gt;

&lt;p&gt;A key risk was partial batch processing: if the migration was interrupted after fetching 10,000 records but before all were written, restarting from the last checkpoint could result in duplicate writes. To prevent this, we used the original ES document ID as the ID in the new index. Elasticsearch overwrites documents with the same ID, ensuring no duplicates.&lt;/p&gt;
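&lt;p&gt;Concretely, the idempotent write amounts to carrying the source _id into the bulk request. A sketch of building the _bulk body (the index name and document shape are illustrative):&lt;/p&gt;

```python
import json

def bulk_body(target_index, hits):
    """Build an Elasticsearch _bulk NDJSON body from scroll hits, reusing each
    hit's original _id so a replayed batch overwrites instead of duplicating."""
    lines = []
    for hit in hits:
        lines.append(json.dumps({"index": {"_index": target_index, "_id": hit["_id"]}}))
        lines.append(json.dumps(hit["_source"]))  # document already in the new schema
    return "\n".join(lines) + "\n"
```

&lt;p&gt;POSTed to the _bulk endpoint with Content-Type: application/x-ndjson, every replayed document simply overwrites itself, which is what makes checkpoint replays safe.&lt;/p&gt;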

&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;600 Million Logs Migrated:&lt;/strong&gt; Across the four monthly indexes, we processed about 600 million audit logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;20 Million Records/Day:&lt;/strong&gt; On average, 20 million records were migrated daily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero Downtime, No Data Loss:&lt;/strong&gt; The migration was seamless, with no impact on customers or dependent services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only after all the data had been copied to the new-schema indexes did we enable the system to read from them. This ensured no impact on existing system features during the migration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd9uo4a7yc8y31r69x3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnd9uo4a7yc8y31r69x3e.png" alt="Switching from old to new index post migration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since data flows continuously into the current month’s index, a separate module dual-wrote each new record to the new index alongside the existing old index. Older monthly indexes posed no such problem, since data cannot be added to them: the system is append-only, with no edits or deletes allowed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmdgexgqo917tb2ftqxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmdgexgqo917tb2ftqxy.png" alt="Dual write"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaways
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Checkpoints are Critical:&lt;/strong&gt; Storing ES cursor values in Redis allowed safe, resumable migrations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batching and Parallelism:&lt;/strong&gt; Careful batching and controlled parallelism balanced speed and system stability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idempotency:&lt;/strong&gt; Using document IDs to prevent duplicates was essential for data integrity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Preparation Pays Off:&lt;/strong&gt; Understanding production data and system constraints upfront made all the difference.&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>elasticsearch</category>
      <category>cloud</category>
      <category>designpatterns</category>
    </item>
    <item>
      <title>Progress bar to show content read/remaining.</title>
      <dc:creator>SagarTrimukhe</dc:creator>
      <pubDate>Sun, 13 Nov 2022 15:37:10 +0000</pubDate>
      <link>https://dev.to/sagartrimukhe/progress-bar-to-show-content-readremaining-28e9</link>
      <guid>https://dev.to/sagartrimukhe/progress-bar-to-show-content-readremaining-28e9</guid>
      <description>&lt;p&gt;I found &lt;a href="https://flexiple.com/react/introduction-to-higher-order-components-in-react-by-example/"&gt;this&lt;/a&gt; website where a progress bar is shown which helps users to know how much content they have read or is still remaining.&lt;/p&gt;

&lt;p&gt;I tried to recreate that in React; it is easy and involves just a bit of maths!&lt;/p&gt;

&lt;h2&gt;
  
  
  How to get the percentage?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c5sy281u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idaa4c1adamm8822ducu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c5sy281u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idaa4c1adamm8822ducu.png" alt="Mathematical explanation" width="880" height="698"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Simple maths :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Code:
&lt;/h2&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;h2&gt;
  
  
  Demo:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WlsDIRii--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwmuszmgkap27uisc9q4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WlsDIRii--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gwmuszmgkap27uisc9q4.gif" alt="Progress_bar_demo" width="500" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>css</category>
      <category>html</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Generate YAML files in Golang.</title>
      <dc:creator>SagarTrimukhe</dc:creator>
      <pubDate>Wed, 25 Aug 2021 14:55:36 +0000</pubDate>
      <link>https://dev.to/sagartrimukhe/generate-yaml-files-in-golang-29h1</link>
      <guid>https://dev.to/sagartrimukhe/generate-yaml-files-in-golang-29h1</guid>
      <description>&lt;p&gt;This is post is about converting go struct/map into a yaml using this amazing go package &lt;a href="https://github.com/go-yaml/yaml"&gt;go-yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be using the &lt;a href="https://pkg.go.dev/gopkg.in/yaml.v2#Marshal"&gt;yaml.Marshal&lt;/a&gt; method to convert a struct into YAML.&lt;/p&gt;

&lt;p&gt;Each example includes complete code, so you can copy, paste, and quickly run and experiment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1: Very basic example of converting a struct to yaml.
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "gopkg.in/yaml.v2"
)

type Student struct {
    Name string
    Age  int
}

func main() {
    s1 := Student{
        Name: "Sagar",
        Age:  23,
    }

    yamlData, err := yaml.Marshal(&amp;amp;s1)

    if err != nil {
        fmt.Printf("Error while Marshaling. %v", err)
    }

    fmt.Println(" --- YAML ---")
    fmt.Println(string(yamlData))  // yamlData will be in bytes. So converting it to string.
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS D:\Programming Languages\Go\src\yaml_conversion&amp;gt; go run .\main.go
 --- YAML ---
name: Sagar
age: 23 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example 2: Providing custom tags.
&lt;/h3&gt;

&lt;p&gt;Let's say we want different key names in the output YAML. We can do that by adding tags, very similar to JSON tags.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "gopkg.in/yaml.v2"
)

type Student struct {
    Name string `yaml:"student-name"`
    Age  int    `yaml:"student-age"`
}

func main() {
    s1 := Student{
        Name: "Sagar",
        Age:  23,
    }

    yamlData, err := yaml.Marshal(&amp;amp;s1)
    if err != nil {
        fmt.Printf("Error while Marshaling. %v", err)
    }

    fmt.Println(" --- YAML with custom tags---")
    fmt.Println(string(yamlData))
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS D:\Programming Languages\Go\src\yaml_conversion&amp;gt; go run .\main.go
 --- YAML with custom tags---
student-name: Sagar
student-age: 23
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Tip: Make sure there is no space between yaml: and the tag name. (I wasted 30 minutes figuring out why the tags were not appearing in the output.)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 3: Structure with Arrays and Maps.
&lt;/h3&gt;

&lt;p&gt;Here I have used another structure to store marks. A simple map also works.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
    "fmt"
    "gopkg.in/yaml.v2"
)

type MarksStruct struct {
    Science     int8 `yaml:"science"`
    Mathematics int8 `yaml:"mathematics"`
    English     int8 `yaml:"english"`
}

type Student struct {
    Name   string      `yaml:"student-name"`
    Age    int8        `yaml:"student-age"`
    Marks  MarksStruct `yaml:"subject-marks"`
    Sports []string
}

func main() {
    s1 := Student{
        Name: "Sagar",
        Age:  23,
        Marks: MarksStruct{
            Science:     95,
            Mathematics: 90,
            English:     90,
        },
        Sports: []string{"Cricket", "Football"},
    }

    yamlData, err := yaml.Marshal(&amp;amp;s1)

    if err != nil {
        fmt.Printf("Error while Marshaling. %v", err)
    }

    fmt.Println(" --- YAML with maps and arrays ---")
    fmt.Println(string(yamlData))
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PS D:\Programming Languages\Go\src\yaml_conversion&amp;gt; go run .\main.go
 --- YAML with maps and arrays ---
student-name: Sagar
student-age: 23
subject-marks:
  science: 95
  mathematics: 90
  english: 90
sports:
- Cricket
- Football
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example 4: Sometimes we want to actually create a YAML file.
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main
import (
    "fmt"
    "io/ioutil"

    "gopkg.in/yaml.v2"
)

type MarksStruct struct {
    Science     int8 `yaml:"science"`
    Mathematics int8 `yaml:"mathematics"`
    English     int8 `yaml:"english"`
}

type Student struct {
    Name   string      `yaml:"student-name"`
    Age    int8        `yaml:"student-age"`
    Marks  MarksStruct `yaml:"subject-marks"`
    Sports []string
}

func main() {
    s1 := Student{
        Name: "Sagar",
        Age:  23,
        Marks: MarksStruct{
            Science:     95,
            Mathematics: 90,
            English:     90,
        },
        Sports: []string{"Cricket", "Football"},
    }

    yamlData, err := yaml.Marshal(&amp;amp;s1)

    if err != nil {
        fmt.Printf("Error while Marshaling. %v", err)
    }

    fileName := "test.yaml"
    err = ioutil.WriteFile(fileName, yamlData, 0644)
    if err != nil {
        panic("Unable to write data into the file")
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will be a file named test.yaml.&lt;/p&gt;

&lt;p&gt;Thank you for reading. Hope this helps.&lt;/p&gt;

</description>
      <category>go</category>
      <category>yaml</category>
    </item>
    <item>
      <title>Complete guide to deploy a simple full stack application in Docker</title>
      <dc:creator>SagarTrimukhe</dc:creator>
      <pubDate>Thu, 05 Aug 2021 17:03:48 +0000</pubDate>
      <link>https://dev.to/sagartrimukhe/complete-guide-to-deploy-a-simple-full-stack-application-in-docker-4lk6</link>
      <guid>https://dev.to/sagartrimukhe/complete-guide-to-deploy-a-simple-full-stack-application-in-docker-4lk6</guid>
      <description>&lt;h2&gt;
  
  
  Table of contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a simple todo UI using React.&lt;/li&gt;
&lt;li&gt;Create a simple backend server using Express.&lt;/li&gt;
&lt;li&gt;Connect Frontend and Backend.&lt;/li&gt;
&lt;li&gt;Create UI bundle and serve it through Express server.&lt;/li&gt;
&lt;li&gt;Run the application in Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Creating a simple TODO app using React.
&lt;/h3&gt;

&lt;p&gt;We will be using create-react-app to quickly set up a React application with the basic dependencies installed.&lt;/p&gt;

&lt;p&gt;Command to create the app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx create-react-app frontend
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will create a folder named frontend containing all the basic files, with dependencies installed.&lt;/p&gt;

&lt;p&gt;Two more dependencies are required:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;axios: to make API calls (the native fetch API can also be used).&lt;/li&gt;
&lt;li&gt;uuid: to generate random IDs for todo tasks.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Commands to install the above:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install --save axios
npm install --save uuid
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Below is the simple UI code, which has:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A text input box to take the task name.&lt;/li&gt;
&lt;li&gt;A "Add" button to add new tasks.&lt;/li&gt;
&lt;li&gt;List of previously created tasks with "Delete" button to delete the tasks.&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;This is how it will look (no fancy colors or animations):&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs3gfg9r6i74gsitylrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs3gfg9r6i74gsitylrb.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;API handlers are maintained in a separate file.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
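&lt;p&gt;That gist is also not rendered here; below is a hypothetical sketch of such a handlers file, written against the native fetch API (the article uses axios, whose calls look very similar). The route shapes are assumptions.&lt;/p&gt;

```javascript
// Hypothetical sketch of the API-handler module; the actual gist may differ.
// Uses the native fetch API; with axios the calls would be near-identical.
const BASE = '/tasks';

// fetch the current list of tasks from the backend
async function getTasks() {
  const res = await fetch(BASE);
  return res.json();
}

// create a new task with the given name
async function createTask(name) {
  const res = await fetch(BASE, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name }),
  });
  return res.json();
}

// delete the task with the given id (the /tasks/:id shape is an assumption)
async function deleteTask(id) {
  await fetch(`${BASE}/${id}`, { method: 'DELETE' });
}
```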


&lt;h3&gt;
  
  
  Creating a server using Express.js
&lt;/h3&gt;

&lt;p&gt;Let's start by creating a folder named backend.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;cd into the folder&lt;br&gt;
cd backend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run "npm init" command to create a package.json file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When the above command is ran, it will ask for few details. All can be skipped by hitting enter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run "npm install --save express" to install the express js dependency. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By default the entry point of the application will be pointing to "index.js". It can be changed by editing the package.json file&lt;br&gt;
"main": "your_file_name.js"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a file index.js (or your_file_name.js)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Following is the simple express js code with 3 APIs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;It has:&lt;br&gt;
a. A GET /tasks endpoint to get the list of tasks.&lt;br&gt;
b. A POST /tasks endpoint to create a new task.&lt;br&gt;
c. A DELETE /tasks endpoint to delete a task.&lt;/p&gt;

&lt;p&gt;All the tasks are stored in memory, so the task data will be lost as soon as the server is stopped.&lt;br&gt;
(So, NOT exactly a FULL STACK application.)&lt;/p&gt;

&lt;p&gt;To start the server, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You can test the APIs using a REST client like &lt;a href="https://learning.postman.com/docs/getting-started/introduction/" rel="noopener noreferrer"&gt;Postman&lt;/a&gt; or simple curl commands.&lt;/p&gt;
&lt;h3&gt;
  
  
  Connecting frontend and backend
&lt;/h3&gt;

&lt;p&gt;Even though the correct endpoints are used in the UI, it will not be able to reach the backend APIs due to &lt;a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS" rel="noopener noreferrer"&gt;CORS&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To solve this, we need to use a proxy.&lt;br&gt;
Proxying the calls is as simple as updating the UI's package.json file.&lt;/p&gt;

&lt;p&gt;Add the configuration below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; "proxy": "http://localhost:4000"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now the UI should be able to connect to backend APIs.&lt;/p&gt;
&lt;h3&gt;
  
  
  Generating the UI bundle and serving it through Express
&lt;/h3&gt;

&lt;p&gt;Generating the production UI bundle is simple.&lt;br&gt;
Run the command below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This will create a folder named build in your frontend root directory, containing an index.html file along with the JavaScript and CSS files.&lt;/p&gt;

&lt;p&gt;This can be served using a static server like the "serve" package.&lt;/p&gt;

&lt;p&gt;BUT, the UI will not be able to reach the backend APIs,&lt;br&gt;
BECAUSE the proxy feature is available only in development mode.&lt;/p&gt;

&lt;p&gt;To solve this issue, we can serve the UI using the same Express server. YES, you read it right: a single server to serve both the frontend and the backend.&lt;/p&gt;

&lt;p&gt;To do so:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Copy the build folder into the backend folder.&lt;/li&gt;
&lt;li&gt;Add the following line to the index.js file:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.use(express.static('build'));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;An absolute path can also be given; we are just keeping it simple here :)&lt;/p&gt;

&lt;p&gt;Now start the Express server. The UI will be served at the '/' path, and the APIs will be available at the other paths.&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploying the application in a container
&lt;/h3&gt;

&lt;p&gt;So far we have developed and deployed the application on the local machine.&lt;/p&gt;

&lt;p&gt;If you are familiar with Docker, you can quickly deploy this application in a container as well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Dockerfile.
A Dockerfile is a simple text file containing all the commands needed to create a Docker image.
The following Dockerfile uses Alpine as a base image and exposes port 4000.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
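&lt;p&gt;The Dockerfile gist is not rendered here; below is a hypothetical sketch matching that description (the base image tag and folder layout are assumptions, not the article's exact file):&lt;/p&gt;

```dockerfile
# Hypothetical sketch; the actual gist may differ.
# Node on Alpine as the base image, as described above.
FROM node:14-alpine

WORKDIR /app

# install backend dependencies first to make better use of layer caching
COPY package*.json ./
RUN npm install --production

# copy the server code and the React build folder
COPY . .

# the Express server listens on port 4000
EXPOSE 4000
CMD ["node", "index.js"]
```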



&lt;ul&gt;
&lt;li&gt;Create a Docker image.
Run the following command to build the image:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker image build -t todoapp:1.0 .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Start the container.
Once the image is created, the next step is to create a container.
Run the following command to create and start the container:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker container run -d -p 8000:4000 todoapp:1.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;If Docker is running on a VM, the application can be accessed at VM-IP-Address:8000,
e.g. &lt;a href="http://192.168.43.18:8000" rel="noopener noreferrer"&gt;http://192.168.43.18:8000&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The complete project is available at: &lt;a href="https://github.com/SagarTrimukhe/todo-app" rel="noopener noreferrer"&gt;https://github.com/SagarTrimukhe/todo-app&lt;/a&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>docker</category>
      <category>node</category>
      <category>express</category>
    </item>
    <item>
      <title>No Internet Connection wrapper for React apps</title>
      <dc:creator>SagarTrimukhe</dc:creator>
      <pubDate>Fri, 14 May 2021 07:36:26 +0000</pubDate>
      <link>https://dev.to/sagartrimukhe/no-internet-connection-wrapper-for-react-apps-5dl8</link>
      <guid>https://dev.to/sagartrimukhe/no-internet-connection-wrapper-for-react-apps-5dl8</guid>
      <description>&lt;p&gt;Imagine, &lt;br&gt;
We have a web application that heavily depends on the backend server for information (eg. records in a table) and that information needs to be constantly updated. We might think to use some polling mechanism.&lt;/p&gt;

&lt;p&gt;But if the data received from the server is stored directly in a React state variable and the user loses the internet connection, there is a chance of updating the state with empty data.&lt;/p&gt;

&lt;p&gt;So, instead of showing empty data, we can show a message, something like "No internet connection." &lt;/p&gt;

&lt;h3&gt;
  
  
  How can we do that?
&lt;/h3&gt;

&lt;p&gt;We can write a wrapper component and wrap the entry-level component. So whenever the internet connection is lost, a custom page/message can be shown.&lt;/p&gt;

&lt;p&gt;Here I have used the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/NavigatorOnLine/onLine" rel="noopener noreferrer"&gt;navigator.onLine&lt;/a&gt; API to get the network status.&lt;/p&gt;

&lt;p&gt;Enough story, show me the code :)&lt;/p&gt;

&lt;h3&gt;
  
  
  Main component
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import './App.css';
import NoInternetConnection from './NoInternetConnection'

function App() {
  return (
    &amp;lt;div className="App"&amp;gt;
        &amp;lt;NoInternetConnection&amp;gt;
        &amp;lt;h1&amp;gt;My first post on DEV!!&amp;lt;/h1&amp;gt;
        &amp;lt;/NoInternetConnection&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}

export default App;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Wrapper component
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import React, {useState, useEffect} from 'react';

const NoInternetConnection = (props) =&amp;gt; {
    // state variable holds the state of the internet connection
    const [isOnline, setOnline] = useState(true);

    // On initialization, set the isOnline state and register the
    // event listeners; clean them up on unmount so they are not
    // duplicated on every render.
    useEffect(() =&amp;gt; {
        setOnline(navigator.onLine)

        const goOnline = () =&amp;gt; setOnline(true)
        const goOffline = () =&amp;gt; setOnline(false)

        window.addEventListener('online', goOnline)
        window.addEventListener('offline', goOffline)

        return () =&amp;gt; {
            window.removeEventListener('online', goOnline)
            window.removeEventListener('offline', goOffline)
        }
    }, [])

    // if user is online, return the child component else return a custom component
    if (isOnline) {
        return props.children
    } else {
        return (&amp;lt;h1&amp;gt;No Internet Connection. Please try again later.&amp;lt;/h1&amp;gt;)
    }
}

export default NoInternetConnection;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here is the demo.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh6vclamb8kg32f8nmte.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh6vclamb8kg32f8nmte.gif" alt="ezgif.com-gif-maker"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it. By the way, this is my first post on DEV (damn! on the internet :)). Feedback is appreciated.  &lt;/p&gt;

</description>
      <category>javascript</category>
      <category>react</category>
    </item>
  </channel>
</rss>
