<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Goutham</title>
    <description>The latest articles on DEV Community by Goutham (@gouthamsayee).</description>
    <link>https://dev.to/gouthamsayee</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1299539%2Fd5f46b9d-de81-44bb-8358-b4150852be6f.jpeg</url>
      <title>DEV Community: Goutham</title>
      <link>https://dev.to/gouthamsayee</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gouthamsayee"/>
    <language>en</language>
    <item>
      <title>When Frontend Loves Backend: BFF Pattern Explained</title>
      <dc:creator>Goutham</dc:creator>
      <pubDate>Sat, 13 Apr 2024 21:43:11 +0000</pubDate>
      <link>https://dev.to/gouthamsayee/when-frontend-loves-backend-bff-pattern-explained-2fl5</link>
      <guid>https://dev.to/gouthamsayee/when-frontend-loves-backend-bff-pattern-explained-2fl5</guid>
      <description>&lt;p&gt;As platforms evolve from monolithic architectures to distributed systems, the variety and number of clients—ranging from mobile and web to various API consumers—also increase. With this diversification, the one-size-fits-all API contract of traditional microservices becomes less effective. Each client has unique requirements and faces distinct challenges when rendering their user interfaces, necessitating a more tailored approach.&lt;/p&gt;

&lt;h3&gt;
  
  
  BFF - Backend For Frontend
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;Consider a scenario involving a hypothetical version of Shopify, an e-commerce giant, which we'll use as our example to illustrate this concept. As "Shopify" grows, it is accessed by an increasing number of mobile and web clients. These clients interact with backend microservices for fetching product catalog details, order management, and payment processing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bbhbvu0jakrq2a25f02.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bbhbvu0jakrq2a25f02.gif" alt="Before BFF" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When retrieving product catalog details, both mobile and web clients currently use the same API provided by the catalog service. While the core information required by each client remains the same, significant challenges arise due to their differing needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mobile App Requirements&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt;: Essential for user satisfaction, requiring quick load times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Efficiency&lt;/strong&gt;: Crucial for users on limited data plans, necessitating smaller image payloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: Important to minimize on-device computation to save battery life.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Web Application Requirements&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rich Data&lt;/strong&gt;: Users on larger screens need more detailed information and higher resolution images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Features&lt;/strong&gt;: Capabilities like additional product recommendations and interactive elements to enhance user engagement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Utilization&lt;/strong&gt;: Desktops can handle more data and complex features without the constraints of mobile devices.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To address these distinct needs effectively, the BFF (Backend for Frontend) pattern comes into play, creating specialized backend services tailored specifically for each type of client.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web Catalog Service&lt;/strong&gt;: Delivers rich content and interactive experiences suitable for desktop browsing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile Catalog Service&lt;/strong&gt;: Ensures speed and efficiency for mobile users.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzn2y4093uhbrq4se5k4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzn2y4093uhbrq4se5k4.gif" alt="After BFF" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;
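
&lt;p&gt;As a rough sketch of the idea (all names here are hypothetical, not taken from any real Shopify API), the mobile BFF might reshape the catalog service's full product record into a leaner, client-specific payload:&lt;/p&gt;

```java
// Hypothetical upstream product record, as returned by the catalog microservice.
record Product(long id, String name, String description,
               double price, String fullImageUrl) {}

// The trimmed view a mobile BFF would return: no long description,
// and a smaller image variant to save bandwidth.
record MobileProduct(long id, String name, double price, String thumbnailUrl) {}

class MobileCatalogBff {
    // Derive a thumbnail URL by convention (an assumption for this sketch:
    // the CDN serves a resized variant under a "/thumb/" path).
    static MobileProduct toMobileView(Product p) {
        String thumb = p.fullImageUrl().replace("/full/", "/thumb/");
        return new MobileProduct(p.id(), p.name(), p.price(), thumb);
    }
}
```

&lt;p&gt;The web BFF would keep the full description and high-resolution image URL; each BFF can then evolve independently with its client.&lt;/p&gt;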

&lt;h3&gt;
  
  
  Considering API Design Alternatives
&lt;/h3&gt;

&lt;p&gt;Instead of two separate backend services, another approach might involve creating two distinct sets of APIs within the same service that cater to different frontends. This method would still align with the BFF philosophy by tailoring API responses to the needs of each client type, though it would maintain a single service architecture. This could simplify deployment and maintenance but might require more sophisticated internal logic to manage the different API behaviors effectively.&lt;/p&gt;
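
&lt;p&gt;A minimal sketch of that alternative (field names are invented for illustration): a single service keys its response shape off the calling client instead of running two deployments:&lt;/p&gt;

```java
// One service, two response shapes, selected by client type.
class CatalogService {
    static String productPayload(String clientType, String name, double price) {
        String base = name + "|" + price;
        if (clientType.equals("mobile")) {
            return base;                     // compact payload for constrained clients
        }
        return base + "|recommendations=[]"; // richer payload for web clients
    }
}
```

&lt;p&gt;The trade-off is visible even at this scale: one deployment, but branching logic that grows with every new client type.&lt;/p&gt;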

&lt;p&gt;Both approaches aim to optimize the user experience by addressing specific frontend needs, leveraging the BFF pattern's flexibility to enhance service delivery.&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>backenddevelopment</category>
      <category>cloudcomputing</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Hands-On Guide: Implementing Debezium for PostgreSQL to Kafka Integration</title>
      <dc:creator>Goutham</dc:creator>
      <pubDate>Sat, 23 Mar 2024 20:57:57 +0000</pubDate>
      <link>https://dev.to/gouthamsayee/debezium-implementation-in-postgres-2fjd</link>
      <guid>https://dev.to/gouthamsayee/debezium-implementation-in-postgres-2fjd</guid>
      <description>&lt;p&gt;In Part 1, we explored the workings of Debezium and its integration process. This post will guide us through implementing the Debezium engine, enabling it to connect to a PostgreSQL database and export the required records to Kafka.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's solve the use case below
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;A small online bookstore wants to notify its customers in real-time when books they are interested in become available or when their stock levels are low, encouraging them to make a purchase decision.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5fdxsgukkd5akk4s37h.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5fdxsgukkd5akk4s37h.gif" alt="usecase" width="800" height="258"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For this experiment, let's implement a solution that creates a channel to notify users in real time of any new books inserted into the book table.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  We can implement this end-to-end solution in three simple steps
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install Postgres and Kafka using Docker.&lt;/li&gt;
&lt;li&gt;Configure the necessary permissions for the Debezium user.&lt;/li&gt;
&lt;li&gt;Develop the embedded Debezium engine (using Spring Boot).&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 1
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;The Docker Compose file below lets us spin up the tech stack&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Postgres ( &lt;em&gt;our active database&lt;/em&gt; )&lt;/li&gt;
&lt;li&gt;Kafka and Zookeeper ( &lt;em&gt;used by Debezium to export the DB records&lt;/em&gt; )&lt;/li&gt;
&lt;li&gt;Kafdrop ( &lt;em&gt;to see the data in action&lt;/em&gt; )
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  # PostgreSQL service
  postgres:
    image: postgres:latest
    container_name: my-postgres-container
    environment:
      POSTGRES_DB: book_store
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    ports:
      - "5432:5432"
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      - ./custom-postgresql.conf:/etc/postgresql/postgresql.conf
    command: ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]

  # Apache Kafka service using Confluent version
  kafka:
    image: confluentinc/cp-kafka:6.2.1
    container_name: my-kafka-container
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
    ports:
      - "9092:9092"
    volumes:
      - ./kafka-data:/var/lib/kafka/data
    depends_on:
      - zookeeper

  # Zookeeper service required for Kafka
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    container_name: my-zookeeper-container
    ports:
      - "2181:2181"

  # kafdrop service required for kafka ui
  kafdrop:
    image: obsidiandynamics/kafdrop:latest
    container_name: my-kafdrop-container
    environment:
      KAFKA_BROKERCONNECT: kafka:9093
      JVM_OPTS: "-Xms32M -Xmx64M"
    ports:
      - "9000:9000"
    depends_on:
      - kafka

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the Compose file and bring the stack up with Docker Compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose -f debezium.yaml up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l2s6xw461zugivmhpea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l2s6xw461zugivmhpea.png" alt="Run docker" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensure that PostgreSQL is configured with wal_level=logical to enable logical replication. This setting informs PostgreSQL that logical replication is needed, allowing Debezium to interpret the WAL (Write-Ahead Logging) log accurately and extract detailed information about the affected rows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3erpafu1zonh7afxw1gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3erpafu1zonh7afxw1gg.png" alt="Wallevel" width="458" height="204"&gt;&lt;/a&gt;&lt;/p&gt;
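
&lt;p&gt;For reference, the custom-postgresql.conf mounted in the Compose file only needs a couple of lines beyond the defaults for this setup (a minimal sketch; tune the remaining settings for your environment):&lt;/p&gt;

```
# Enable logical decoding so Debezium can read row-level
# changes from the write-ahead log
wal_level = logical
# Accept connections from outside the container
listen_addresses = '*'
```

&lt;p&gt;Running SHOW wal_level; from psql should then report logical, as in the screenshot above.&lt;/p&gt;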

&lt;h2&gt;
  
  
  Step 2
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Configure a Postgres publication for Debezium to use&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;create publication bookstore_replication for table book_store.book_inventory ;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
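
&lt;p&gt;The step title mentions permissions: besides creating the publication, the role Debezium connects as must be allowed to replicate and to read the captured table. A hedged sketch, assuming the myuser role from the Compose file (a superuser like this one already has these rights; they matter for non-superuser setups):&lt;/p&gt;

```
-- Allow the Debezium connection role to use logical replication
ALTER ROLE myuser WITH REPLICATION;
-- And to read the table being captured
GRANT SELECT ON book_store.book_inventory TO myuser;
```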



&lt;h2&gt;
  
  
  Step 3
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In this step we create a simple Spring Boot application.&lt;/li&gt;
&lt;li&gt;Configure the app to subscribe to changes from our book_store inventory table.&lt;/li&gt;
&lt;li&gt;Read all the DML operations from the DB and publish them as events to Kafka.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Embedded Engine
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;pom.xml&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"&amp;gt;
    &amp;lt;modelVersion&amp;gt;4.0.0&amp;lt;/modelVersion&amp;gt;
    &amp;lt;parent&amp;gt;
        &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
        &amp;lt;artifactId&amp;gt;spring-boot-starter-parent&amp;lt;/artifactId&amp;gt;
        &amp;lt;version&amp;gt;3.2.3&amp;lt;/version&amp;gt;
        &amp;lt;relativePath/&amp;gt; &amp;lt;!-- lookup parent from repository --&amp;gt;

    &amp;lt;/parent&amp;gt;
    &amp;lt;groupId&amp;gt;com.tech.debezium&amp;lt;/groupId&amp;gt;
    &amp;lt;artifactId&amp;gt;embeddedDebeziumEngine&amp;lt;/artifactId&amp;gt;
    &amp;lt;version&amp;gt;0.0.1-SNAPSHOT&amp;lt;/version&amp;gt;
    &amp;lt;name&amp;gt;embeddedDebeziumEngine&amp;lt;/name&amp;gt;
    &amp;lt;description&amp;gt;EmbeddedDebeziumEngine&amp;lt;/description&amp;gt;


    &amp;lt;properties&amp;gt;
        &amp;lt;version.debezium&amp;gt;2.5.2.Final&amp;lt;/version.debezium&amp;gt;
    &amp;lt;/properties&amp;gt;


    &amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;!-- https://mvnrepository.com/artifact/org.springframework/spring-web --&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-web&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;6.1.4&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;!-- For Maven --&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-tomcat&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-test&amp;lt;/artifactId&amp;gt;
            &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;io.debezium&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;debezium-api&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;${version.debezium}&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;io.debezium&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;debezium-core&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;${version.debezium}&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;io.debezium&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;debezium-embedded&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;${version.debezium}&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.apache.kafka&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;kafka-clients&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;

        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;io.debezium&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;debezium-connector-postgres&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;${version.debezium}&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;!-- https://mvnrepository.com/artifact/org.apache.kafka/connect-api --&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.apache.kafka&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;connect-api&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;3.5.1&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.projectlombok&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;lombok&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;1.18.22&amp;lt;/version&amp;gt; &amp;lt;!-- Replace with the latest version --&amp;gt;
            &amp;lt;scope&amp;gt;provided&amp;lt;/scope&amp;gt;
        &amp;lt;/dependency&amp;gt;

        &amp;lt;!-- https://mvnrepository.com/artifact/io.confluent/kafka-connect-storage-core --&amp;gt;


    &amp;lt;/dependencies&amp;gt;

    &amp;lt;build&amp;gt;
        &amp;lt;plugins&amp;gt;
            &amp;lt;plugin&amp;gt;
                &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
                &amp;lt;artifactId&amp;gt;spring-boot-maven-plugin&amp;lt;/artifactId&amp;gt;
            &amp;lt;/plugin&amp;gt;
        &amp;lt;/plugins&amp;gt;
    &amp;lt;/build&amp;gt;

&amp;lt;/project&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Spring Boot Application Class&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static void main(String[] args) {
        SpringApplication.run(EmbeddedDebeziumEngineApplication.class, args);
        Properties properties = new Properties();
        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        Configuration postgresDebeziumConfig = io.debezium.config.Configuration.create()
                .with("name", "postgres-inventory-connector")
                .with("bootstrap.servers","localhost:9092")
                .with("connector.class", "io.debezium.connector.postgresql.PostgresConnector")
                .with("offset.storage", "org.apache.kafka.connect.storage.KafkaOffsetBackingStore")
                .with("offset.storage.topic", "debezium_bookstore_lsn")
                .with("offset.storage.partitions", "1")
                .with("offset.storage.replication.factor", "1")
                .with("offset.flush.interval.ms","6000")
                .with("database.hostname", "localhost")
                .with("database.port", "5432")
                .with("database.user", "myuser")
                .with("database.password", "mypassword")
                .with("database.dbname", "book_store")
                .with("topic.prefix", "book_store")
                .with("table.include.list", "book_store.book_inventory")
                .with("slot.name","bookstore_replication")
                .with("plugin.name","pgoutput")
                .with("snapshot.mode","initial")
                .build();

        PostgresEventHandler changeEventProcessor = new PostgresEventHandler(properties);
        debeziumEngine = DebeziumEngine.create(ChangeEventFormat.of(Connect.class))
                .using(postgresDebeziumConfig.asProperties())
                .notifying(changeEventProcessor::handleChangeEvent)
                .build();

        // Start the Debezium engine on its own thread; calling run() again on
        // the main thread would start a second, duplicate processing loop.
        executorService = Executors.newSingleThreadExecutor();
        executorService.execute(debeziumEngine);
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is my handler class&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.tech.debezium.embeddedDebeziumEngine.handler;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.tech.debezium.embeddedDebeziumEngine.model.InventoryEvent;
import io.debezium.data.Envelope;
import io.debezium.engine.RecordChangeEvent;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Optional;
import java.util.Properties;

import static io.debezium.data.Envelope.FieldName.OPERATION;


public class PostgresEventHandler {

    private final KafkaProducer&amp;lt;String, String&amp;gt; kafkaProducer;
    private static final String TOPIC_NAME = "bookstore_inventory_stream";
    private static final String ERROR_MESSAGE = "Exception occurred during event handling";
    private static final Logger logger = LoggerFactory.getLogger(PostgresEventHandler.class);
    private final ObjectMapper objectMapper = new ObjectMapper();
    public PostgresEventHandler(Properties kafkaProperties) {
        this.kafkaProducer = new KafkaProducer&amp;lt;&amp;gt;(kafkaProperties);
    }


    public void handleChangeEvent(RecordChangeEvent&amp;lt;SourceRecord&amp;gt; sourceRecordRecordChangeEvent) {
        SourceRecord sourceRecord = sourceRecordRecordChangeEvent.record();
        Struct sourceRecordChangeValue = (Struct) sourceRecord.value();

        if (sourceRecordChangeValue != null) {
            try {
                Envelope.Operation operation = Envelope.Operation.forCode((String) sourceRecordChangeValue.get(OPERATION));
                Optional&amp;lt;InventoryEvent&amp;gt; event = getProductEvent(sourceRecord, operation);
                if (event.isPresent()) {
                    String jsonEvent = objectMapper.writeValueAsString(event.get());
                    kafkaProducer.send(new ProducerRecord&amp;lt;&amp;gt;(TOPIC_NAME, jsonEvent));
                }
            } catch (Exception e) {
                logger.error(ERROR_MESSAGE, e);
            }
        }
    }

    private Optional&amp;lt;InventoryEvent&amp;gt; getProductEvent(SourceRecord event, Envelope.Operation op) {
        final Struct value = (Struct) event.value();
        Struct values = null;

        // Since the operations for CREATE and READ are identical in handling,
        // they are combined into a single case.
        switch (op) {
            case CREATE:
            case READ:
            case UPDATE: // Handle UPDATE similarly to CREATE and READ, but you're now aware it's an update.
                values = value.getStruct("after");
                break;
            case DELETE:
                values = value.getStruct("before");
                if (values != null) {
                    Integer id = values.getInt32("id");
                    return Optional.of(new InventoryEvent(op.toString(), id, null, null));
                } else {
                    return Optional.empty();
                }

            default:
                // Consider whether you need a default case to handle unexpected operations
                return Optional.empty();
        }

        if (values != null) {
            String name = values.getString("name");
            Integer id = values.getInt32("id");
            Double price = (Double) values.get("price");
            return Optional.of(new InventoryEvent(op.toString(), id, name, price));
        } else {
            return Optional.empty();
        }

    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
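
&lt;p&gt;The InventoryEvent model referenced by the handler isn't shown in the post. Its shape can be inferred from the constructor calls (operation, id, name, price); the sketch below is an assumption, and the real class in the repository may differ:&lt;/p&gt;

```java
// Inferred from "new InventoryEvent(op.toString(), id, name, price)" in the
// handler; the real class in the repo may differ (this is an assumption).
class InventoryEvent {
    private final String operation;
    private final Integer id;
    private final String name;
    private final Double price;

    InventoryEvent(String operation, Integer id, String name, Double price) {
        this.operation = operation;
        this.id = id;
        this.name = name;
        this.price = price;
    }

    // Public getters are what Jackson's ObjectMapper uses when the handler
    // calls writeValueAsString on this object.
    public String getOperation() { return operation; }
    public Integer getId() { return id; }
    public String getName() { return name; }
    public Double getPrice() { return price; }
}
```

&lt;p&gt;Jackson serializes through these public getters, so the JSON keys on the topic become operation, id, name and price.&lt;/p&gt;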



&lt;p&gt;Once everything is set up, let's see Debezium in action&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Start our Debezium application; it should subscribe to our database and push the received messages to Kafka&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gbc3xolxspodc2ebn1t.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gbc3xolxspodc2ebn1t.gif" alt="SpringBoot app" width="464" height="342"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to Postgres and create/delete/update records.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l7g559mn6gb6fuuflyq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l7g559mn6gb6fuuflyq.gif" alt="Postgres Operations" width="634" height="184"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure that all events are captured in Kafka using Kafdrop; you should see two topics &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;bookstore_inventory_stream&lt;/strong&gt; ( &lt;em&gt;This topic contains the actual events corresponding to changes in our inventory table&lt;/em&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;debezium_bookstore_lsn&lt;/strong&gt; ( &lt;em&gt;This topic is utilized by Debezium to store the log sequence number up to which the engine has read. This ensures that in the event of restarts, Debezium can resume streaming from the precise position where it left off.&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xcdhg845kxw0hvija7b.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xcdhg845kxw0hvija7b.gif" alt="Kafdrop" width="522" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the data is available in Kafka, we can create multiple consumers to notify users as needed; we'll cover that in a separate post. Here is the overall setup:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jev3xyo2grnhlhf6zh5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1jev3xyo2grnhlhf6zh5.gif" alt="End to End" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;
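
&lt;p&gt;The events on bookstore_inventory_stream carry everything a notifier needs. As a taste of such a consumer (the message wording and routing are invented for illustration), mapping an event's operation and book name to a user-facing message could look like this:&lt;/p&gt;

```java
// Builds the user-facing message for one change event. The operation strings
// match the Envelope.Operation names the handler emits ("CREATE", "DELETE", ...).
class NotificationBuilder {
    static String messageFor(String operation, String bookName) {
        switch (operation) {
            case "CREATE":
                return "New arrival: " + bookName + " is now in stock!";
            case "DELETE":
                return bookName + " is no longer available.";
            default:
                return bookName + " was updated.";
        }
    }
}
```

&lt;p&gt;A real consumer would poll the topic with a Kafka consumer, deserialize the JSON back into the event model, and hand the message to a push or email channel.&lt;/p&gt;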

&lt;p&gt;The complete source code is available in &lt;a href="https://github.com/goutham8290/embeddedDebeziumEngine"&gt;this repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>cdc</category>
      <category>postgres</category>
      <category>docker</category>
    </item>
    <item>
      <title>Shipping Data in Real Time with Debezium: Part 1</title>
      <dc:creator>Goutham</dc:creator>
      <pubDate>Sun, 17 Mar 2024 20:26:35 +0000</pubDate>
      <link>https://dev.to/gouthamsayee/shipping-data-done-easy-3305</link>
      <guid>https://dev.to/gouthamsayee/shipping-data-done-easy-3305</guid>
      <description>&lt;p&gt;&lt;em&gt;In the digital age, businesses are on a constant quest to delve deeper into their data, aiming to glean insights that can propel their products and services to new heights. This journey, however, encounters a significant hurdle when dealing with distributed systems, where managing data isn't just about accessing it—it's about keeping up with it in real time. That's where Change Data Capture (CDC) and an open-source tool named Debezium come into play.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Debezium is designed to address these challenges head-on, offering businesses a way to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Do More with Your Data&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplify Your Applications&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;React Quickly to Change&lt;/strong&gt; &lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Data transformation begins with acknowledging the significance of every individual change.&lt;/code&gt; &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;I think it fits Debezium well, and you will soon agree.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Our blog series on Debezium will be split into two parts. The first part will cover the basics—introducing you to Debezium, its importance, and how it revolutionizes data management in distributed systems. &lt;br&gt;
The second part will be a practical guide, walking you through how to set up and use Debezium, empowering you to implement real-time data synchronization and analysis.&lt;/em&gt;&lt;/p&gt;

&lt;h6&gt;
  
  
  Why the fancy name "Debezium"? :D
&lt;/h6&gt;

&lt;blockquote&gt;
&lt;p&gt;The name is a combination of "DBs", as in the abbreviation for multiple databases, and the "-ium" suffix used in the names of many elements of the periodic table. Say it fast: "DBs-ium". If it helps, we say it like "dee-BEE-zee-uhm". &lt;a href="https://debezium.io/documentation/faq/#where_did_the_name_debezium_come_from"&gt;Source&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flshacibgtw29xo35kj1n.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flshacibgtw29xo35kj1n.gif" alt="CDC Architecture" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Active Database Operations:&lt;/strong&gt; It all begins with your operational database, continuously processing create, update, and delete operations as part of its day-to-day functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploying the Debezium Engine:&lt;/strong&gt; Integrate Debezium with your database by setting up the Debezium engine. This engine connects to your database, subscribing to change data capture (CDC) events. It acts as a dedicated listener for all database changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Transformation:&lt;/strong&gt; Debezium utilizes database-specific connectors that tap into the transaction logs. These logs are then translated by Debezium into a standardized, easily consumable format. This crucial step ensures that the change data is not only captured but also made ready for downstream processing without requiring direct queries against the database, thereby reducing overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming to Kafka:&lt;/strong&gt; The processed data is then streamed to Apache Kafka topics. This step effectively decouples data producers (your databases) from data consumers, ensuring that the data is reliably available for real-time consumption and further processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Consumption:&lt;/strong&gt; Finally, consumers (which can be any downstream applications, services, or data pipelines) subscribe to the relevant Kafka topics. They can now access and utilize the real-time data feed for analytics, replication, or any other use case that benefits from having immediate access to change events.&lt;/p&gt;

&lt;p&gt;This article explores how Debezium can effectively tackle business challenges. Ready to roll up our sleeves? Let's delve into &lt;a href="https://dev.to/gouthamsayee/debezium-implementation-in-postgres-2fjd"&gt;Part 2&lt;/a&gt;, where we'll walk through implementing it in just a few simple steps. Stay tuned!&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>cdc</category>
      <category>debezium</category>
      <category>realtimeanalytics</category>
    </item>
  </channel>
</rss>
