<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Guneet Nagia</title>
    <description>The latest articles on DEV Community by Guneet Nagia (@guneet_08).</description>
    <link>https://dev.to/guneet_08</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2894390%2F4ec3217f-3399-4e08-a76a-13c67177b626.png</url>
      <title>DEV Community: Guneet Nagia</title>
      <link>https://dev.to/guneet_08</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/guneet_08"/>
    <language>en</language>
    <item>
      <title>Real-Time Data Streaming with Spring WebFlux and SSE</title>
      <dc:creator>Guneet Nagia</dc:creator>
      <pubDate>Sat, 22 Feb 2025 12:50:56 +0000</pubDate>
      <link>https://dev.to/guneet_08/real-time-data-streaming-with-spring-webflux-and-sse-1obk</link>
      <guid>https://dev.to/guneet_08/real-time-data-streaming-with-spring-webflux-and-sse-1obk</guid>
      <description>&lt;p&gt;I recently built a real-time data streaming solution using Spring WebFlux and Server-Sent Events (SSE) to push short bursts of data to customers via an API. Here’s why it worked and how it stacks up.&lt;/p&gt;

&lt;p&gt;I needed a lightweight, secure way to deliver short-duration updates. SSE won over alternatives:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vs. Kafka/Pulsar&lt;/strong&gt;: Too heavy for my needs; SSE is simpler and needs no broker.&lt;br&gt;
&lt;strong&gt;Vs. Long Polling&lt;/strong&gt;: Less efficient; polling re-opens connections constantly, while SSE holds a single connection open.&lt;br&gt;
&lt;strong&gt;Vs. WebSockets&lt;/strong&gt;: Overkill for one-way data; SSE is simpler, and its one-directional flow shrinks the attack surface.&lt;/p&gt;

&lt;p&gt;SSE shines because it’s:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lightweight&lt;/strong&gt;: Runs over plain HTTP with minimal overhead.&lt;br&gt;
&lt;strong&gt;Great for Short Bursts&lt;/strong&gt;: A natural fit for short-lived, event-driven updates.&lt;br&gt;
&lt;strong&gt;Simple&lt;/strong&gt;: Easy to set up server-side, and browsers consume it natively via &lt;code&gt;EventSource&lt;/code&gt;.&lt;br&gt;
&lt;strong&gt;Secure&lt;/strong&gt;: One-way flow, plus whatever custom logic you layer on (e.g., auth, filtering).&lt;br&gt;
&lt;strong&gt;Scalable&lt;/strong&gt;: One long-lived connection per client, which a non-blocking server can multiplex cheaply.&lt;/p&gt;
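
&lt;p&gt;Part of what makes SSE so lightweight is its wire format: each event is just a few plain-text lines on one long-lived HTTP response. Here’s a quick illustrative sketch of that framing in Python (Spring handles this for you; this is just to show how little protocol there is):&lt;/p&gt;

```python
def format_sse(data, event=None, event_id=None):
    """Frame a message in the SSE wire format (plain text over HTTP)."""
    lines = []
    if event_id is not None:
        lines.append("id: " + str(event_id))
    if event is not None:
        lines.append("event: " + event)
    for chunk in data.splitlines():
        lines.append("data: " + chunk)
    # A blank line terminates the event.
    return "\n".join(lines) + "\n\n"
```

&lt;p&gt;Multi-line payloads become multiple &lt;code&gt;data:&lt;/code&gt; lines and the client reassembles them. No framing protocol, no broker.&lt;/p&gt;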

&lt;h2&gt;Spring WebFlux Boost&lt;/h2&gt;

&lt;p&gt;Spring WebFlux made it real-time and scalable:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reactive&lt;/strong&gt;: Non-blocking from controller to socket, ideal for streaming.&lt;br&gt;
&lt;strong&gt;Scalable&lt;/strong&gt;: Handles thousands of concurrent connections on a handful of event-loop threads.&lt;br&gt;
&lt;strong&gt;Flexible&lt;/strong&gt;: Easy to add auth and filtering to the stream.&lt;/p&gt;
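
&lt;p&gt;The non-blocking idea is language-agnostic, so here’s a hypothetical asyncio analogue of the push-based model that Reactor’s &lt;code&gt;Flux&lt;/code&gt; gives you: the producer yields items as they become ready, and waiting costs no thread.&lt;/p&gt;

```python
import asyncio

async def ticker(values, interval=0.01):
    """Emit each value asynchronously, like a Flux pushing items downstream."""
    for value in values:
        await asyncio.sleep(interval)  # non-blocking pause; the event loop stays free
        yield value

async def consume():
    """Subscribe to the stream and collect everything it pushes."""
    received = []
    async for value in ticker(["tick-1", "tick-2", "tick-3"]):
        received.append(value)
    return received
```

&lt;p&gt;In WebFlux, the rough equivalent is a controller method that returns a &lt;code&gt;Flux&lt;/code&gt; and produces &lt;code&gt;text/event-stream&lt;/code&gt;; the framework takes care of the SSE framing for you.&lt;/p&gt;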

&lt;h2&gt;The Result&lt;/h2&gt;

&lt;p&gt;Customers hit an API, get authenticated, and receive filtered, real-time data over SSE. It’s fast, secure, and scales with WebFlux’s reactive power.&lt;/p&gt;

&lt;h2&gt;When to Use It&lt;/h2&gt;

&lt;p&gt;SSE is best for one-way, short-burst updates. Need two-way? Try WebSockets. Big data pipelines? Kafka or Pulsar. For my case, SSE nailed it.&lt;/p&gt;

&lt;h2&gt;Takeaway&lt;/h2&gt;

&lt;p&gt;Spring WebFlux + SSE = a clean, efficient streaming solution. Try it if you need simple, real-time data delivery.&lt;/p&gt;

&lt;p&gt;Thoughts? Share below!&lt;/p&gt;

</description>
      <category>webflux</category>
      <category>sse</category>
      <category>realtime</category>
    </item>
    <item>
      <title>Seamless Database Migration in a Microservices Architecture: Cloud-Native, Event-Driven Strategies</title>
      <dc:creator>Guneet Nagia</dc:creator>
      <pubDate>Sat, 22 Feb 2025 12:23:36 +0000</pubDate>
      <link>https://dev.to/guneet_08/seamless-database-migration-in-a-microservices-architecture-cloud-native-event-driven-strategies-2gm0</link>
      <guid>https://dev.to/guneet_08/seamless-database-migration-in-a-microservices-architecture-cloud-native-event-driven-strategies-2gm0</guid>
      <description>&lt;p&gt;Downtime during a database migration can tank a production app—trust me, I’ve been there. As a solution design, I tackled this head-on, designing a cloud-native, event-driven solution using Kafka, AWS, and microservices. Stick around to see how I replaced DynamoDB with MongoDB without breaking a sweat.&lt;/p&gt;

&lt;p&gt;The plan: a phased, zero-downtime migration using AWS, Kafka, and event-driven microservices. An abstraction layer sits between services and databases, with Kafka providing event-driven reliability. The diagram below shows the flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9leatad4psd3z9qipcx4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9leatad4psd3z9qipcx4.png" alt="Phased migration flow diagram" width="741" height="1046"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Phase 1: Current State&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Data Flow&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Writer Service writes data to Amazon DynamoDB.&lt;/li&gt;
&lt;li&gt;Reader Services fetch data from DynamoDB.&lt;/li&gt;
&lt;li&gt;Archival Service moves older data to an Amazon S3 Bucket for storage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Retention Policy&lt;/strong&gt;: Data in DynamoDB is retained for 15 days before archival.&lt;/p&gt;

&lt;h2&gt;Phase 2: Transition (Dual Write &amp;amp; Validation)&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Data Replication&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduce a Kafka Sink Connector that consumes the Writer Service’s events and writes them into MongoDB.&lt;/li&gt;
&lt;li&gt;Continue writing to DynamoDB while also storing data in MongoDB.&lt;/li&gt;
&lt;/ul&gt;
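
&lt;p&gt;Conceptually, the dual-write step is a thin facade in front of both stores. A minimal sketch, with plain dicts as hypothetical stand-ins for the DynamoDB and MongoDB clients (in the real setup, the second write flows through Kafka and the sink connector rather than a direct call):&lt;/p&gt;

```python
class DualWriter:
    """Write every record to the current primary and the new secondary store."""

    def __init__(self, dynamo_like, mongo_like):
        # Both arguments are illustrative stand-ins (plain dicts) for real clients.
        self.dynamo_like = dynamo_like
        self.mongo_like = mongo_like

    def write(self, key, value):
        self.dynamo_like[key] = value  # existing path stays authoritative
        self.mongo_like[key] = value   # replicated copy, used for validation
```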

&lt;p&gt;&lt;strong&gt;New Archival Process&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A new S3 bucket is created to store data from MongoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;New Service Introduction&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A New Service API is introduced that returns the same responses as the existing Reader Services.&lt;/li&gt;
&lt;li&gt;A toggle flag is implemented in the API to switch between DynamoDB and MongoDB.&lt;/li&gt;
&lt;/ul&gt;
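
&lt;p&gt;The toggle flag is the heart of the cutover: one switch decides which backend serves reads. A minimal sketch (hypothetical names, dicts standing in for the two databases):&lt;/p&gt;

```python
def read(key, dynamo_like, mongo_like, use_mongo=False):
    """Serve reads from the new store when the toggle is on, else the old one."""
    store = mongo_like if use_mongo else dynamo_like
    return store.get(key)
```

&lt;p&gt;Flipping the flag (e.g., via a config service or an environment variable) switches backends without redeploying callers.&lt;/p&gt;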

&lt;p&gt;&lt;strong&gt;Validation Steps&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate responses from New Service against existing Reader Services.&lt;/li&gt;
&lt;li&gt;Ensure consistency in both S3 archival processes.&lt;/li&gt;
&lt;/ul&gt;
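
&lt;p&gt;Validation can be as simple as diffing responses key by key. Another sketch, again with dicts standing in for the two data sources:&lt;/p&gt;

```python
def find_mismatches(keys, old_source, new_source):
    """Return the keys whose values differ between the old and new stores."""
    mismatches = []
    for key in keys:
        if old_source.get(key) != new_source.get(key):
            mismatches.append(key)
    return mismatches
```

&lt;p&gt;Run this continuously during Phase 2; the flag only flips once the mismatch list stays empty.&lt;/p&gt;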

&lt;h2&gt;Phase 3: Full Migration &amp;amp; Decommissioning&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Switching Data Source&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change the toggle flag to make Reader Services fetch data from MongoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enhancements &amp;amp; Optimization&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully transition reading services to the New Service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Decommissioning Old Infrastructure&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decommission DynamoDB and remove dependency on the old archival process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Final Validation &amp;amp; Completion&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate that all data is successfully migrated and all services function as expected.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cloud</category>
      <category>microservices</category>
      <category>eventdriven</category>
      <category>kafka</category>
    </item>
  </channel>
</rss>
