<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shredded Mustard</title>
    <description>The latest articles on DEV Community by Shredded Mustard (@yrafe).</description>
    <link>https://dev.to/yrafe</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2215194%2Fc0bb5a5f-10d7-4d11-b433-5e44d4a0f44e.png</url>
      <title>DEV Community: Shredded Mustard</title>
      <link>https://dev.to/yrafe</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yrafe"/>
    <language>en</language>
    <item>
      <title>"Listen to Yourself". Event sourcing for Domain Driven Design ... One Domain Event to Rule Them All</title>
      <dc:creator>Shredded Mustard</dc:creator>
      <pubDate>Mon, 02 Dec 2024 12:30:46 +0000</pubDate>
      <link>https://dev.to/yrafe/listen-to-yourself-one-domain-event-to-rule-them-all-event-sourcing-for-domain-driven-design-4pbc</link>
      <guid>https://dev.to/yrafe/listen-to-yourself-one-domain-event-to-rule-them-all-event-sourcing-for-domain-driven-design-4pbc</guid>
      <description>&lt;p&gt;&lt;strong&gt;Event-driven architecture&lt;/strong&gt; is evolving rapidly. Software architects have been developing event-driven architectural patterns to solve the problems that come with such a robust design. Executing a &lt;strong&gt;non-idempotent&lt;/strong&gt; operation might not be as simple as publishing a message to a queue. A non-idempotent operation is an operation that changes the state of our application. There are numerous patterns that dictate when and how to publish a non-idempotent event. When a "&lt;strong&gt;Fire and Forget&lt;/strong&gt;" event happens, we must be careful in dealing with the side effects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5igknhahum490vmegm6z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5igknhahum490vmegm6z.jpeg" alt="Image description" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One problem to deal with is the &lt;strong&gt;Dual-Write&lt;/strong&gt; problem: writing to two external systems in one transactional context. Often, a process involves writing to the database and then publishing an event to a message broker like Kafka. For example, in Event Sourcing, after adding or updating an entity in the database, a message is published to advertise the new state of the application.&lt;/p&gt;

&lt;p&gt;This problem can be solved programmatically by implementing rollback mechanisms, or by applying patterns such as the Outbox pattern. The &lt;strong&gt;Listen to Yourself&lt;/strong&gt; pattern is another solution, and it is a remarkably simple one.&lt;/p&gt;

&lt;p&gt;Instead of updating the database and then publishing an event, we only publish the event. The same component that publishes the event also consumes it; hence the name "Listen to Yourself". In other words, the component responsible for the operation listens to its own topic and consumes the messages it sent in order to update its database. This means that when the event is published, the database update is still pending and may or may not happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Listen to yourself
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Before diving into this, make sure you know what Event-driven architecture is and some common patterns. You can read about it here &lt;a href="https://dev.to/yrafe/series/29296"&gt;Event Driven Architectural patterns&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The "&lt;strong&gt;Listen to yourself&lt;/strong&gt;" pattern adds the event &lt;strong&gt;publisher&lt;/strong&gt; to the List of event &lt;strong&gt;consumers&lt;/strong&gt;. You heard it right. The same component that is responsible for updating the database and firing the event becomes one of the event consumers. In other words, the publisher &lt;strong&gt;listens to itself&lt;/strong&gt;. However, the database update will be postponed to after firing the event.&lt;/p&gt;

&lt;p&gt;Why is this useful?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimize side effects&lt;/strong&gt;&lt;br&gt;
Publishing the event first minimizes the problems that would arise from a faulty message broker. If the broker fails to write the event to the message queue for any reason, there is no event and therefore no side effects, as opposed to writing to the database first and then failing to fire the event. This suits event sourcing, which treats each event as a state change: if there is no event, there is no state change.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Easy replay implementation&lt;/strong&gt;&lt;br&gt;
Event replay is a very useful technique in event sourcing. Because the database is only ever updated in the consumer implementation, implementing a replay for the primary domain is as easy as editing that consumer implementation or extending its behavior.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Separation of concerns&lt;/strong&gt;&lt;br&gt;
When you structure your &lt;strong&gt;&lt;a href="https://martinfowler.com/eaaCatalog/domainModel.html" rel="noopener noreferrer"&gt;domain model&lt;/a&gt;&lt;/strong&gt; carefully, the domain event becomes a single source of truth that provides great consistency across your services. If you later want to move some subdomains to a separate service, it can be done in very little time, and the logic will not change because the subdomains are already embedded in the domain event. You can then replay all domain events from the new service and, voilà, you have a new, consistent subdomain database. This is the most important aspect of the "Listen to Yourself" pattern, and arguably the part where you should spend most of your time.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
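&lt;p&gt;Before wiring in Kafka and MongoDB, the mechanics can be reduced to a toy, in-memory sketch (all class and method names here are hypothetical, for illustration only): the request handler only publishes, and the same class later consumes its own event and performs the actual write.&lt;/p&gt;

```java
import java.util.ArrayDeque;
import java.util.HashMap;

// Toy "Listen to Yourself" sketch: the queue stands in for the Kafka topic
// and the map stands in for the database. Names are illustrative only.
public class ListenToYourselfToy {
  private final ArrayDeque topic = new ArrayDeque();
  private final HashMap database = new HashMap();

  // Publisher side: only publish the domain event; do NOT touch the database.
  public String createUser(String username) {
    topic.add("USER_CREATED:" + username);
    return "accepted"; // the database write is still pending at this point
  }

  // Consumer side: the same component consumes its own event
  // and only then performs the database write.
  public void pollOnce() {
    Object event = topic.poll();
    if (event != null) {
      String[] parts = ((String) event).split(":");
      database.put(parts[1], "CREATED");
    }
  }

  public boolean userExists(String username) {
    return database.containsKey(username);
  }
}
```

&lt;p&gt;Note the window between createUser returning and pollOnce running: a read during that window will not see the new user yet, which is the eventual-consistency trade-off this pattern accepts.&lt;/p&gt;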

&lt;h2&gt;
  
  
  Listen to Yourself in practice
&lt;/h2&gt;

&lt;p&gt;Consider an e-commerce store that lets users claim a free $10 gift-card credit as soon as they sign up.&lt;/p&gt;

&lt;p&gt;Whenever a new user is created, $10 is added to their wallet.&lt;/p&gt;

&lt;p&gt;In this scenario we have two domains.&lt;/p&gt;

&lt;p&gt;The first domain is the "User".&lt;br&gt;
The second domain is the "Gift".&lt;/p&gt;

&lt;p&gt;The "Wallet" is a subdomain of the User.&lt;/p&gt;

&lt;p&gt;Before you dive in make sure you have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Java 11 or above installed on your machine&lt;/li&gt;
&lt;li&gt;Apache Maven installed&lt;/li&gt;
&lt;li&gt;Running instances of Kafka and MongoDB (we will use Docker Compose to run both, so if you have Docker installed you do not need to install anything else)&lt;/li&gt;
&lt;li&gt;Any IDE or Text Editor&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You will find the full source code here: &lt;a href="https://github.com/Shredded-Mustard/ltys-user-svc" rel="noopener noreferrer"&gt;https://github.com/Shredded-Mustard/ltys-user-svc&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us take a look at the dependencies we define in our pom.xml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    &amp;lt;dependencies&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-data-mongodb&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-web&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.kafka&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-kafka&amp;lt;/artifactId&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.springframework.boot&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;spring-boot-starter-validation&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;3.3.5&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.apache.commons&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;commons-lang3&amp;lt;/artifactId&amp;gt;
            &amp;lt;version&amp;gt;3.12.0&amp;lt;/version&amp;gt;
        &amp;lt;/dependency&amp;gt;
        &amp;lt;dependency&amp;gt;
            &amp;lt;groupId&amp;gt;org.projectlombok&amp;lt;/groupId&amp;gt;
            &amp;lt;artifactId&amp;gt;lombok&amp;lt;/artifactId&amp;gt;
            &amp;lt;optional&amp;gt;true&amp;lt;/optional&amp;gt;
        &amp;lt;/dependency&amp;gt;
    &amp;lt;/dependencies&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will be using the Spring Framework's Web package along with Spring Kafka.&lt;/p&gt;




&lt;p&gt;Our controller will look like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.noketchup.shop.user.controller;

import com.noketchup.shop.user.controller.dto.UserRequest;
import com.noketchup.shop.user.controller.dto.UserResponse;
import com.noketchup.shop.user.domain.UserDomainModel;
import com.noketchup.shop.user.domain.WalletDomainModel;
import com.noketchup.shop.user.producer.UserProducer;
import com.noketchup.shop.user.service.UserService;
import com.noketchup.shop.user.service.WalletService;
import com.noketchup.shop.user.service.mapper.UserMapper;
import jakarta.validation.Valid;
import lombok.RequiredArgsConstructor;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;

import java.util.List;

@RestController
@RequiredArgsConstructor
@RequestMapping("user")
public class UserController {

  private final UserService userService;
  private final WalletService walletService;
  private final UserProducer userProducer;
  private final UserMapper mapper;

  @GetMapping
  public ResponseEntity&amp;lt;UserResponse&amp;gt; getUserByUniqueIdentifier(
          @RequestParam(value = "email", required = false) String email,
          @RequestParam(value = "phoneNumber", required = false) String phoneNumber,
          @RequestParam(value = "username", required = false) String username
  ) {
    UserResponse userResponse = userService.getUserByUniqueParam(email, phoneNumber, username);
    return ResponseEntity.ok(userResponse);
  }

  @PostMapping
  public ResponseEntity&amp;lt;String&amp;gt; createNewUser(@RequestBody @Valid UserRequest userRequest) {
    //Validate all fields before taking any action
    userService.validateUser(userRequest);

    //Map Request to the Domain Model
    UserDomainModel userDomainModelObject = mapper.mapToUserDomain(userRequest);
    WalletDomainModel wallet = walletService.createNewEmptyWalletDomainObject();
    userDomainModelObject.setWallets(List.of(wallet));

    //Send Domain event
    userProducer.sendUserDomainEvent(userDomainModelObject);
    return ResponseEntity.accepted().body("User creation in progress");
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have two main endpoints.&lt;/p&gt;

&lt;p&gt;The GET endpoint fetches a single user by email, phone number, or username.&lt;br&gt;
The POST endpoint accepts a new user and publishes the domain event; the actual database write happens later, in the consumer.&lt;/p&gt;



&lt;p&gt;Our producer interface needs only one method, sendUserDomainEvent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.noketchup.shop.user.producer;


import com.noketchup.shop.user.domain.UserDomainModel;

public interface UserProducer {
  void sendUserDomainEvent(UserDomainModel userDomainModelObject);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need a consumer interface with just one method, consumeUserDomainEventModel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.noketchup.shop.user.consumer;

import com.noketchup.shop.user.domain.UserDomainModel;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public interface UserConsumer {
  void consumeUserDomainEventModel(ConsumerRecord&amp;lt;String, UserDomainModel&amp;gt; userDomainModel);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we implement both interfaces:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.noketchup.shop.user.producer;

import com.noketchup.shop.user.domain.UserDomainModel;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
@RequiredArgsConstructor
public class UserProducerImpl implements UserProducer {
  @Value("${config.user.domain.topic}")
  private String userDomainTopic;


  private final KafkaTemplate&amp;lt;String, UserDomainModel&amp;gt; userDomainKafkaTemplate;

  public void sendUserDomainEvent(UserDomainModel userDomainModelObject) {
    ProducerRecord&amp;lt;String, UserDomainModel&amp;gt; record = new ProducerRecord&amp;lt;String, UserDomainModel&amp;gt;(
            userDomainTopic,
            userDomainModelObject.getId().toString(),
            userDomainModelObject
    );

    record.headers().add(new DomainEventHeader("NAME", "CREATE_NEW_USER"));
    userDomainKafkaTemplate.send(record);
  }

  public static class DomainEventHeader implements Header {

    private final String key;
    private final String value;

    public DomainEventHeader(String key, String value){
      this.key = key;
      this.value = value;
    }

    @Override
    public String key() {
      return key;
    }

    @Override
    public byte[] value() {
      return value.getBytes();
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.noketchup.shop.user.consumer;

import com.mongodb.MongoException;
import com.noketchup.shop.user.domain.UserDomainModel;
import com.noketchup.shop.user.exception.RetryableException;
import com.noketchup.shop.user.service.UserService;
import lombok.RequiredArgsConstructor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;


@Component
@RequiredArgsConstructor
public class UserConsumerImpl implements UserConsumer {
  private final UserService userService;

  @Transactional
  @KafkaListener(topics = "${config.user.domain.topic}", groupId = "${spring.kafka.consumer.group-id}")
  public void consumeUserDomainEventModel(ConsumerRecord&amp;lt;String, UserDomainModel&amp;gt; userDomainModel) {
    UserDomainModel userDomainModelObject = userDomainModel.value();
    try {
      userService.commitDomain(userDomainModelObject);
    } catch (MongoException e) {
      throw new RetryableException("Retryable Exception", e.getMessage());
    }

  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
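&lt;p&gt;The consumer above throws a RetryableException, which is defined in the linked repository rather than shown here. Any unchecked exception will do; a minimal sketch might look like this (a guess at the shape, not the repository's exact class):&lt;/p&gt;

```java
// Minimal sketch of a retryable exception: an unchecked exception carrying
// extra failure details. The real class lives in the linked repository;
// this version is an illustrative assumption.
public class RetryableException extends RuntimeException {
  private final String details;

  public RetryableException(String message, String details) {
    super(message);
    this.details = details;
  }

  public String getDetails() {
    return details;
  }
}
```

&lt;p&gt;The fixed-backoff properties in the properties file below suggest this exception is retried by an error handler such as Spring Kafka's DefaultErrorHandler configured with a FixedBackOff; rethrowing on MongoException means a transient database failure simply causes the event to be consumed again.&lt;/p&gt;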






&lt;p&gt;We also define our Mongo repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package com.noketchup.shop.user.repository;

import com.noketchup.shop.user.model.User;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.stereotype.Repository;

import java.util.UUID;

@Repository
public interface UserRepository extends MongoRepository&amp;lt;User, UUID&amp;gt; {
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Make sure to download the full code from &lt;a href="https://github.com/Shredded-Mustard/ltys-user-svc" rel="noopener noreferrer"&gt;https://github.com/Shredded-Mustard/ltys-user-svc&lt;/a&gt; or you can define the model and implement the User Service and Mappers in your own way.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And our properties file may look like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spring.application.name=User Service

# Database Configuration
spring.data.mongodb.host=localhost
spring.data.mongodb.port=27017
spring.data.mongodb.database=userdb
spring.data.mongodb.username=noketchupadmin
spring.data.mongodb.password=sausage
spring.data.mongodb.authentication-database = admin
spring.data.mongodb.auto-index-creation=true

# Kafka Configuration
spring.kafka.bootstrap-servers=localhost:29092
config.user.domain.topic=user

spring.kafka.consumer.group-id=@artifactId@-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
#spring.kafka.listener.ack-mode=MANUAL
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.listener.fixed-backoff.max-attempts=4
spring.kafka.listener.fixed-backoff.interval=1000
spring.kafka.consumer.properties.spring.json.trusted.packages=*

spring.kafka.producer.acks=all
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;To run Kafka and MongoDB, we will add a Docker Compose file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181

  kafka:
    image: confluentinc/cp-kafka:7.3.2
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  mongo:
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_DATABASE: userdb
      MONGO_INITDB_ROOT_USERNAME: noketchupadmin
      MONGO_INITDB_ROOT_PASSWORD: sausage
    volumes:
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
  mongo-express:
    image: mongo-express:latest
    restart: always
    ports:
      - 9081:8081
    depends_on:
      - mongo
    environment:
      ME_CONFIG_MONGODB_AUTH_USERNAME: noketchupadmin
      ME_CONFIG_MONGODB_AUTH_PASSWORD: sausage
      ME_CONFIG_MONGODB_URL: mongodb://noketchupadmin:sausage@mongo:27017/
      ME_CONFIG_MONGODB_ENABLE_ADMIN: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Now let us start our containers and run our application&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; docker compose up
&amp;gt; mvn compile
&amp;gt; mvn exec:java -Dexec.mainClass=com.noketchup.shop.user.UserServiceApplication
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can add a new User&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location 'http://localhost:8080/user' \
--header 'Content-Type: application/json' \
--header 'Cookie: lng=en' \
--data-raw '{
    "username": "mustard",
    "dateOfBirth": "1996-03-02",
    "mobileNumber": "0123456789",
    "email": "mustard@onthebeat.com"
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can also fetch our user like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --location 'http://localhost:8080/user?email=mustard%40onthebeat.com' \
--header 'Cookie: lng=en'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and we get&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "id": "74b1e501-5d5c-4e38-9257-4654c301fd34",
    "username": "mustard",
    "dateOfBirth": "1996-03-02",
    "mobileNumber": "0123456789",
    "email": "mustard@onthebeat.com"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;To recap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We first exposed our GET and POST endpoints&lt;/li&gt;
&lt;li&gt;We created the main domain event producer and consumer&lt;/li&gt;
&lt;li&gt;We defined our Mongo repository&lt;/li&gt;
&lt;li&gt;We configured Kafka and MongoDB&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Not only do we have our createNewUser command implemented, we also have a free event that any other service can consume to hook another command onto the operation chain at any point in time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One event to rule them all&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The event is already there; all you have to do is implement your consumer.&lt;/p&gt;

&lt;p&gt;In the Gifts service, we implement our consumer the same way we did in the User service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@Component
@RequiredArgsConstructor
public class UserConsumerImpl implements UserConsumer {
  private final UserService userService;

  @Transactional
  @KafkaListener(topics = "${config.user.domain.topic}", groupId = "${spring.kafka.consumer.group-id}")
  public void consumeUserDomainEventModel(ConsumerRecord&amp;lt;String, UserDomainModel&amp;gt; userDomainModel) {
    UserDomainModel userDomainModelObject = userDomainModel.value();
    // Add gift credits to the newly created user or DO WHATEVER YOU WANT based on that information
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>softwareengineering</category>
      <category>eventdriven</category>
      <category>architecture</category>
      <category>designpatterns</category>
    </item>
    <item>
      <title>Event Sourcing</title>
      <dc:creator>Shredded Mustard</dc:creator>
      <pubDate>Mon, 04 Nov 2024 12:45:58 +0000</pubDate>
      <link>https://dev.to/yrafe/event-sourcing-4obk</link>
      <guid>https://dev.to/yrafe/event-sourcing-4obk</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Imagine data flowing continuously from one source to multiple destinations, passing through various stages of processing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is how we can visualize many modern applications. In building scalable systems, data management is central—along with preserving the history of that data. In traditional systems, when we update an entity's state in a database, the previous state is overwritten, and we lose any record of its past.&lt;/p&gt;

&lt;p&gt;Consider a &lt;strong&gt;banking&lt;/strong&gt; service that records a user’s account balance. Without logging each transaction, we wouldn’t know how the current balance was reached. When users review their balance, they expect to see a history of transactions that led to the current amount. Without this history, there's no clear way to explain how they arrived at their present balance.&lt;/p&gt;

&lt;p&gt;Another example is a &lt;strong&gt;shipping&lt;/strong&gt; service that allows users to track parcels. Once a package is shipped, tracking its journey is crucial for both user experience and problem-solving if the package gets lost. By tracing its path, we can identify exactly where things went wrong.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An event represents a fact, action, or change.&lt;br&gt;
In this approach, each event marks a state change in the application. Storing these events enables us to capture the state of the application at any given time. By &lt;strong&gt;replaying&lt;/strong&gt; these events, we can even restore the application to a previous state, almost like rewinding through a series of snapshots.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To understand &lt;strong&gt;event sourcing&lt;/strong&gt;, consider breaking down an application into domains—distinct business areas or groupings of related processes. When a domain’s state changes, it emits an event to signal this update.&lt;/p&gt;

&lt;p&gt;For example, an &lt;strong&gt;Account domain&lt;/strong&gt; might publish an &lt;em&gt;[Account Created]&lt;/em&gt; event when a new account is added for a user. This event is saved in an &lt;strong&gt;append-only&lt;/strong&gt; log, and other domains can interpret it as they see fit.&lt;/p&gt;

&lt;p&gt;By storing each event, we can replay them at any time to rebuild our application state. This replay feature is invaluable. Say we introduce a new service interested in past events—it can simply replay the event log to establish its state. For instance, if only the account service exists initially and a new &lt;strong&gt;User Service&lt;/strong&gt; is added later, this service can replay the event log to create an up-to-date list of accounts linked to each user.&lt;/p&gt;
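&lt;p&gt;The replay idea fits in a few lines of Java (all names here are hypothetical): the current balance is never stored directly, only derived by folding over the append-only log.&lt;/p&gt;

```java
// Event-sourcing replay sketch: state is derived by replaying the
// append-only event log from the beginning. Names are illustrative.
public class AccountReplay {
  // Each event is a recorded fact, e.g. "DEPOSIT:100" or "WITHDRAW:30".
  public static long replayBalance(String[] events) {
    long balance = 0;
    for (String event : events) {
      String[] parts = event.split(":");
      long amount = Long.parseLong(parts[1]);
      if (parts[0].equals("DEPOSIT")) {
        balance += amount;
      } else if (parts[0].equals("WITHDRAW")) {
        balance -= amount;
      }
    }
    return balance;
  }
}
```

&lt;p&gt;A service added later bootstraps the same way: hand it the full log and it computes its state from scratch, with no database migration involved.&lt;/p&gt;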

&lt;p&gt;This also simplifies migrating or replatforming applications. If we decide to rebuild the application with a new technology, we only need to replay the event log to restore the data without worrying about database migrations or state loss.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Picture it like this&lt;/u&gt;: a user logs into a banking app, registers an account, and opens a new debit account. Two primary domains are involved here—the &lt;strong&gt;User domain&lt;/strong&gt; and the &lt;strong&gt;Account domain&lt;/strong&gt;. As the user is created and stored, the User domain publishes a &lt;em&gt;[UserCreatedEvent]&lt;/em&gt;, while the Account domain follows with an &lt;em&gt;[AccountCreatedEvent]&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Many other interconnected domains may find these events useful. For example, if a &lt;strong&gt;fraud detection&lt;/strong&gt; feature is added to the Security domain, it can process the entire event log to analyze user behavior and account activity, identifying potential fraudulent actions. In scenarios like this, retaining event history proves immensely valuable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event Sourcing vs CRUD
&lt;/h2&gt;

&lt;p&gt;In typical &lt;strong&gt;CRUD&lt;/strong&gt; systems, updating an entity overwrites previous states without keeping a historical record. &lt;strong&gt;Event sourcing&lt;/strong&gt; addresses this by storing each event in an append-only log, preserving the complete change history.&lt;/p&gt;

&lt;p&gt;However, CRUD systems generally ensure &lt;strong&gt;strong consistency&lt;/strong&gt;, meaning all parts of the system see updated data immediately. In contrast, event sourcing provides eventual consistency. This means that if your application requires strong consistency across multiple domains in real time, event sourcing may not be the best fit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event sourcing and CQRS
&lt;/h2&gt;

&lt;p&gt;Event sourcing and &lt;strong&gt;CQRS&lt;/strong&gt; (Command Query Responsibility Segregation) work well together. By separating read and write operations, the write services can be considered the &lt;strong&gt;source of truth&lt;/strong&gt; and scaled as needed. The query services only read the event log to update their state without directly changing the application's state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Auditability and traceability&lt;br&gt;
Event sourcing allows seamless auditing across domains. Instead of implementing auditing logic in every service, an Audit service can be created to consume events directly from the event log. Additional information, like trace IDs or timestamps, can be added to event messages to enhance traceability with minimal effort.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scalability&lt;br&gt;
Combining event sourcing with advanced storage technologies and partitioning techniques makes it easier to scale and test the application, especially in high-demand environments.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Trade-offs
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Eventual consistency&lt;br&gt;
As mentioned, event sourcing only provides eventual consistency. This may not suit applications needing immediate data consistency, like stock trading platforms.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Storage concerns&lt;br&gt;
Events can accumulate rapidly, leading to significant storage demands. Careful management of event storage and strategies like snapshotting may be required to handle data efficiently.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Complexity in querying&lt;br&gt;
While simple events are easy to store, complex queries across large event logs can be challenging.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>eventdriven</category>
      <category>software</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>CQRS</title>
      <dc:creator>Shredded Mustard</dc:creator>
      <pubDate>Sun, 03 Nov 2024 12:16:57 +0000</pubDate>
      <link>https://dev.to/yrafe/cqrs-pattern-ndn</link>
      <guid>https://dev.to/yrafe/cqrs-pattern-ndn</guid>
      <description>&lt;p&gt;Distributed systems have long been evolving. Most distributed systems use the Database per Service concept, which means that each service has its own database that it is fully responsible for, and only the service can access this database to perform &lt;strong&gt;CRUD&lt;/strong&gt; operations. &lt;strong&gt;CRUD&lt;/strong&gt; stands for &lt;strong&gt;Create, Read, Update, and Delete&lt;/strong&gt;. In most modern applications, there is a &lt;strong&gt;Data Access component&lt;/strong&gt;, which acts as a contract between the application and the database. It is responsible for modeling the database and managing querying, inserting, updating, and deleting operations, all encapsulated within the CRUD Data Access Component.&lt;/p&gt;

&lt;p&gt;If our application needs to fetch some data from the database, for example, it communicates with the Data Access component to retrieve the required data. Most read operations may be straightforward, like "find entity by ID" or "find by Username." However, when dealing with write operations, things are often not as simple. Before performing write operations, a series of checks and validations may be necessary. For example, if we have a Tickets database and want to purchase a new ticket for User "X," we might first validate that the user meets the age requirement &lt;em&gt;[findUserAgeByID()]&lt;/em&gt;, then check ticket availability &lt;em&gt;[countTicketsByEventId()]&lt;/em&gt;, and only then proceed with updating the ticket status to "Pending." After that, we debit the user's bank account for the ticket's value and finally change the ticket status to "Acquired." This chain of operations is unnecessary in a read flow, where, for example, the user simply wants to view their purchased tickets. In that case, retrieving data is as straightforward as &lt;em&gt;[findTicketsByUserId()]&lt;/em&gt;. This highlights that most application logic complexity lies in write operations, while read operations remain relatively simple.&lt;/p&gt;
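&lt;p&gt;The ticket-purchase write flow above can be sketched as follows (method names and the age threshold are hypothetical); the point is how many steps sit in front of a single write, compared with a one-line read:&lt;/p&gt;

```java
// Sketch of the write flow described above: validations first, then a
// staged status transition. All names and thresholds are illustrative.
public class TicketPurchaseFlow {

  public static String purchaseTicket(int userAge, int availableTickets) {
    // 1. validate the age requirement (the findUserAgeByID step)
    if (18 > userAge) {
      return "REJECTED: age requirement not met";
    }
    // 2. check availability (the countTicketsByEventId step)
    if (1 > availableTickets) {
      return "REJECTED: sold out";
    }
    // 3. reserve the ticket, charge the account, then confirm
    String status = "PENDING";
    boolean charged = chargeAccount();
    if (charged) {
      status = "ACQUIRED";
    }
    return status;
  }

  // Stand-in for the payment step; a real implementation would call
  // a payment service and could fail.
  private static boolean chargeAccount() {
    return true;
  }
}
```

&lt;p&gt;The read side, by contrast, is a single query (findTicketsByUserId), which is why CQRS lets the two sides be modeled and scaled independently.&lt;/p&gt;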

&lt;p&gt;Another consideration with read operations is the frequent need for joins. Join operations are essential in relational databases. However, when applying the Database per Service principle, most join operations are eliminated, requiring communication with the responsible service to fetch the necessary data. For example, if an event organizer wants to view a list of each ticket along with the corresponding phone number of the user who purchased it, the Tickets Service would first fetch all tickets for the event. Then, for all associated users, we would need to communicate with the Users Service to retrieve their phone numbers. Although we could add a &lt;strong&gt;PHONE_NUMBER&lt;/strong&gt; field to the &lt;strong&gt;TICKET&lt;/strong&gt; table, such a design becomes cumbersome as the application evolves. For instance, if we want to add a feature where the vendor also sees users' emails, the database design grows increasingly complex, making it challenging to extend our service. If you've worked extensively with microservices, you may recognize this as a common problem. The simplest solution in practice is to accept the latency and fetch each user’s details separately or in batches by passing a list of IDs.&lt;/p&gt;
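
&lt;p&gt;As a rough sketch of the batched approach, assuming a hypothetical Users Service client that accepts a list of IDs:&lt;/p&gt;

```python
def tickets_with_phone_numbers(tickets, users_client, batch_size=100):
    """Application-level 'join': enrich tickets with user phone numbers,
    fetching user details in batches instead of one call per ticket."""
    user_ids = sorted({t["user_id"] for t in tickets})
    phones = {}
    for i in range(0, len(user_ids), batch_size):
        batch = user_ids[i:i + batch_size]
        # One round trip to the Users Service per batch of IDs
        phones.update(users_client.get_phone_numbers(batch))
    return [dict(t, phone_number=phones.get(t["user_id"])) for t in tickets]
```

&lt;p&gt;Adding the emails feature later would only mean asking the client for one more field, without touching the &lt;strong&gt;TICKET&lt;/strong&gt; table.&lt;/p&gt;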

&lt;p&gt;&lt;strong&gt;Simple Write flow&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw5w7e3worren4p97ty0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw5w7e3worren4p97ty0.png" alt="Image description" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Read flow&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl6jaudqkxl8p2fi481v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl6jaudqkxl8p2fi481v.png" alt="Image description" width="721" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, write operations are significantly more complex than read operations, and each microservice typically contains far more write logic than read logic. The CQRS pattern can be beneficial in addressing this complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  CQRS pattern
&lt;/h2&gt;

&lt;p&gt;CQRS stands for &lt;strong&gt;Command Query Responsibility Segregation&lt;/strong&gt;. As the name suggests, this pattern segregates commands and queries. &lt;strong&gt;Commands&lt;/strong&gt; are for &lt;strong&gt;insertion, update&lt;/strong&gt;, and &lt;strong&gt;deletion&lt;/strong&gt; operations, while queries are purely for &lt;strong&gt;reads&lt;/strong&gt; and &lt;strong&gt;joins&lt;/strong&gt;. Here’s how it works: each microservice handles write logic within its own database, as in traditional distributed systems. However, we introduce a separate service solely for querying, which has a different data source and contains groups of relevant resources often joined to construct a single entity. The Query service's database can differ completely from the Command database. The Query service is responsible only for fetching data. When the command service updates its database, it publishes an event containing the new or updated model. The Query service consumes these events and updates its database accordingly. When data needs to be fetched, the application queries the Query service, which can efficiently retrieve data and perform joins as needed.&lt;/p&gt;
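
&lt;p&gt;The flow can be illustrated with a minimal sketch, using an in-memory queue as a stand-in for a real message broker; the class and event names are hypothetical:&lt;/p&gt;

```python
from collections import deque

event_bus = deque()  # stand-in for a real message broker

class TicketsCommandService:
    """Owns the write model; publishes an event after every change."""
    def __init__(self):
        self.tickets = {}

    def acquire_ticket(self, ticket_id, user_id):
        self.tickets[ticket_id] = {"status": "Acquired", "user_id": user_id}
        event_bus.append({"type": "TicketAcquired",
                          "ticket_id": ticket_id, "user_id": user_id})

class QueryService:
    """Owns a separate read-optimized store, built by consuming events."""
    def __init__(self):
        self.tickets_by_user = {}

    def consume_pending(self):
        while event_bus:
            event = event_bus.popleft()
            if event["type"] == "TicketAcquired":
                self.tickets_by_user.setdefault(
                    event["user_id"], []).append(event["ticket_id"])

    def find_tickets_by_user_id(self, user_id):
        return self.tickets_by_user.get(user_id, [])
```

&lt;p&gt;Note that until the Query service consumes the pending events, a read returns stale data; this is exactly the eventual consistency trade-off covered later in this article.&lt;/p&gt;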

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3jzaxqfzbp5aov2u13s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3jzaxqfzbp5aov2u13s.png" alt="Image description" width="398" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This model offers numerous advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized Datastore&lt;/strong&gt;&lt;br&gt;
We can select different database technologies for Command and Query databases. For example, if we find that a NoSQL database is more suitable for fetching data, we’ll use a NoSQL database for the Query service. Conversely, if a well-structured, constraint-oriented database is more appropriate for the Command service, we can use a relational database like MySQL or Oracle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better scalability&lt;/strong&gt;&lt;br&gt;
Consider a Posts service in a social media platform like Instagram. The read-to-write load ratio is enormous. A typical user might create a new post once a week, month, or even year, but could view dozens, hundreds, or even thousands of posts daily. It doesn’t make sense to scale a service that handles both reading and writing equally. With CQRS, we can scale each microservice according to its load, leading to higher performance and reduced costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Single Responsibility&lt;/strong&gt;&lt;br&gt;
Each service’s responsibility is clearly defined and distinct from others. A good engineer will know exactly where to make changes when a new requirement arises.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  CQRS in Practice
&lt;/h2&gt;

&lt;p&gt;In this example, we delegate all Command responsibilities to the Tickets Service and Users Service, as before, but we introduce a third service dedicated to querying data. We’ll use a MySQL database for the Tickets and Users services and MongoDB for the Query service.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf7stuuloyile5te6w90.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwf7stuuloyile5te6w90.png" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, whenever a user purchases tickets, the Query database will have all the necessary information.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1ikbieer2dlarftsgug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1ikbieer2dlarftsgug.png" alt="Image description" width="748" height="103"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trade-offs with CQRS
&lt;/h2&gt;

&lt;p&gt;There is a trade-off, of course. Unlike traditional CRUD operations, CQRS provides eventual consistency rather than immediate consistency, meaning that data will eventually be consistent across services but not necessarily right away. If you cannot accommodate this delay, CQRS may not be suitable for your application.&lt;/p&gt;

&lt;p&gt;We have learned how the CQRS pattern works and when to use it. Next, we will explore Event Sourcing, which often goes hand in hand with CQRS.&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>software</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>SAGA Pattern</title>
      <dc:creator>Shredded Mustard</dc:creator>
      <pubDate>Mon, 28 Oct 2024 13:42:33 +0000</pubDate>
      <link>https://dev.to/yrafe/saga-pattern-31cl</link>
      <guid>https://dev.to/yrafe/saga-pattern-31cl</guid>
      <description>&lt;p&gt;When dealing with scalable applications, things can get messy quickly. If you don’t follow established programming principles and structured design patterns, adding new features can become hectic and time-consuming. You may not realize exactly when things went wrong, and you may be uncertain about the steps needed to fix them. This is especially true in distributed systems, where following well-structured design patterns is essential.&lt;/p&gt;

&lt;p&gt;In distributed systems, a crucial concept in practice is Database per Service. This means each microservice has its own exclusive database, for which it alone is responsible, and no other microservice can access or modify it in any way. This provides significant programming benefits like improved abstraction, loose coupling, separation of concerns, and adherence to the single responsibility principle. However, this can lead to common challenges when managing transactions.&lt;/p&gt;

&lt;p&gt;In traditional single-database systems, we often perform multiple database operations transactionally. A transaction has a beginning and an end, and as long as the transaction remains uncommitted (unfinished), everything done within it can be rolled back. All operations within a transactional context will either succeed together or fail together. Once we commit the transaction, there is no going back. This is known as an &lt;strong&gt;ACID&lt;/strong&gt; transaction. It ensures consistency across our system and guarantees that all tables or entities are updated correctly, preventing misinformation. If we want to undo a committed transaction, we must manually perform another transaction to reverse it.&lt;/p&gt;

&lt;p&gt;In distributed systems, things become more complex due to the principle of separation of concerns: often, we cannot maintain a single transaction across multiple microservices. The cleanest solution is to avoid cross-service transactions altogether, though designing a database layout that eliminates them entirely can be complex and demands careful thought, and complete avoidance is not always feasible.&lt;/p&gt;

&lt;p&gt;There are mechanisms to maintain a single transaction across multiple microservices, such as two-phase commits, but these are often slow, require heavy implementation, and can become complex and impractical.&lt;/p&gt;

&lt;p&gt;If strong consistency is essential for your application, you must adhere to atomic transaction rules. However, if you can allow for eventual consistency—where data becomes consistent over time rather than immediately—then there’s more flexibility.&lt;/p&gt;

&lt;p&gt;A popular design pattern for handling this is the &lt;strong&gt;SAGA Pattern&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Consider a ticket reservation web application. A user logs in and wants to return a ticket they’ve already purchased. To process the return, the server must perform three operations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Update the status of the ticket to "Available"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove the ticket from the user’s acquired tickets list&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Issue a refund request&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If our application is monolithic, all operations would be performed in a single transactional context, meaning they would either all succeed or fail together. However, in a microservices setup, we must wait for each process to finish to determine success or failure. If the request fails, we need to manually roll back previous operations.&lt;/p&gt;

&lt;p&gt;Let’s first visualize our scenario in a distributed system.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjul4k8c70266iycsmzx6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjul4k8c70266iycsmzx6.png" alt="Image description" width="791" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The user requests a refund through the API Gateway, which sends a request to the Tickets microservice. The Tickets service changes the ticket status and sends a request to the User Service. The User Service updates the user’s ticket list and sends a request to the Payment Service to issue the refund. The Payment Service communicates with a third-party payment gateway provider to transfer the funds back to the user and updates its own database. While the flow may vary depending on the implementation, the issue of transaction management remains the same.&lt;/p&gt;

&lt;p&gt;If any request fails, we need to roll back previous operations to maintain consistency.&lt;/p&gt;

&lt;h2&gt;
  
  
  The SAGA Pattern
&lt;/h2&gt;

&lt;p&gt;The SAGA pattern addresses this by introducing a rollback mechanism across microservices. It is commonly used in distributed systems where a transactional context spans multiple microservices. The SAGA pattern breaks down a transaction into a series of smaller, individual steps, each managed by a single service. Together, these steps form a distributed transaction. If a step fails at any point in the SAGA, compensating actions (or rollback steps) are used to undo previous actions, ensuring data consistency. This approach applies to transactions that follow a sequence of operations.&lt;/p&gt;

&lt;p&gt;There are two types of SAGA patterns.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Orchestration-Based&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In an orchestration-based SAGA, a dedicated SAGA orchestrator service manages the transaction. The orchestrator knows which services to call in sequence. If one operation fails, all previous operations are rolled back, and the user is informed of the failure’s reason. To adapt our example to an orchestration-based SAGA, we’d add an Orchestrator service to control the flow.&lt;br&gt;
The Orchestrator executes each step sequentially. If a step succeeds, it moves to the next. If a step fails, it rolls back all previous actions.&lt;/p&gt;

&lt;p&gt;Here is how it goes&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43hirqj38e95xdbyiav3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43hirqj38e95xdbyiav3.png" alt="Image description" width="800" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Orchestrator service now has control over the entire process and knows how to handle any failure, allowing the user to see exactly what went wrong. However, this pattern introduces complexity and makes the orchestrator a single point of failure. Additionally, any new operation must be configured in the Orchestrator, including its rollback call.&lt;/p&gt;
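
&lt;p&gt;The orchestrator's core loop can be sketched as a list of steps paired with compensating actions. This is an illustrative sketch under those assumptions, not a production implementation:&lt;/p&gt;

```python
def run_saga(steps):
    """Run (action, compensation) pairs in order; on failure, undo
    the completed steps in reverse and report the reason."""
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception as exc:
            for undo in reversed(done):
                undo()  # roll back previously completed steps
            return {"status": "failed", "reason": str(exc)}
    return {"status": "completed"}
```

&lt;p&gt;For the refund example, the steps would be the ticket status change, the user list update, and the refund request, each paired with its rollback call.&lt;/p&gt;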

&lt;p&gt;To reduce this coupling, we can use the Choreography-based SAGA pattern with event-driven architecture.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Choreography-Based&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As the name suggests, the Choreography-based SAGA pattern lets multiple services work together in a choreographed manner, with no central coordinator. Each service involved in the transaction triggers the next step and listens for events to decide its next action.&lt;/p&gt;

&lt;p&gt;If an operation fails, it triggers an event to roll back previous operations. For example, if the Users service fails to return the ticket, it publishes a "User Ticket List Update Failed" event, which only the Tickets Service listens to and uses to perform its rollback. If the Payment Service fails, it triggers a "Payment Refund Failed" event, which both the Tickets and Users services listen to, performing their respective rollbacks. To achieve this, we need a &lt;strong&gt;Message Broker&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now our happy flow looks like this&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7ueo2onapzckb9nrw1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7ueo2onapzckb9nrw1y.png" alt="Image description" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now to add the SAGA pattern mechanism&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw96oq293jzcgwv9c0avx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw96oq293jzcgwv9c0avx.png" alt="Image description" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, in the successful flow, each service listens for the preceding service’s success event. For instance, the Users service listens for a "Ticket Return Successful" event, while the Payment Service listens for a "User Ticket List Update Success" event. After the Payment Service completes processing, it notifies the user by publishing to the notification service.&lt;/p&gt;

&lt;p&gt;In the failure flow, each service listens for the failure event of the preceding service, triggering a chain of rollback events as needed.&lt;/p&gt;
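
&lt;p&gt;A choreography sketch, with an in-process publish/subscribe object standing in for the message broker (the event names follow the ones above; everything else is hypothetical, and the Payment step is forced to fail to show the rollback chain):&lt;/p&gt;

```python
from collections import defaultdict

class Broker:
    """Minimal in-process pub/sub standing in for a message broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload=None):
        for handler in list(self.subscribers[event_type]):
            handler(payload)

broker = Broker()
log = []

# Tickets service: rolls back on either downstream failure event
broker.subscribe("UserTicketListUpdateFailed", lambda _: log.append("ticket rollback"))
broker.subscribe("PaymentRefundFailed", lambda _: log.append("ticket rollback"))

# Users service: listens for the preceding success event
def on_ticket_returned(_):
    log.append("user list updated")
    broker.publish("UserTicketListUpdateSuccess")
broker.subscribe("TicketReturnSuccessful", on_ticket_returned)
broker.subscribe("PaymentRefundFailed", lambda _: log.append("user rollback"))

# Payment service: simulated failure triggers the rollback events
def on_user_updated(_):
    broker.publish("PaymentRefundFailed")
broker.subscribe("UserTicketListUpdateSuccess", on_user_updated)

broker.publish("TicketReturnSuccessful")
```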

&lt;h2&gt;
  
  
  SAGA Pattern review
&lt;/h2&gt;

&lt;p&gt;The SAGA pattern divides a transaction into distributed steps, where a failure in one service triggers events that roll back previous steps, ultimately achieving consistency across the system.&lt;/p&gt;

&lt;p&gt;The SAGA pattern is ideal for microservices architectures requiring eventual consistency and flexibility in managing long-running, distributed transactions, especially in domains like e-commerce, travel, banking, and other transactional applications. However, the added complexity in designing and monitoring SAGA workflows should be considered based on your application’s requirements.&lt;/p&gt;

&lt;p&gt;When dealing with distributed systems, cross-microservice transactions should generally be avoided. While design solutions exist, they are often complex and add application overhead. Unlike in monolithic transactions, rolling back a distributed transaction is not straightforward and requires careful implementation and thorough testing. Therefore, whenever you apply such a pattern, proceed with caution and thoroughly test your implementation.&lt;/p&gt;

&lt;p&gt;The next pattern we will review is CQRS.&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>software</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Introduction to Event Driven Architecture</title>
      <dc:creator>Shredded Mustard</dc:creator>
      <pubDate>Sun, 27 Oct 2024 09:25:57 +0000</pubDate>
      <link>https://dev.to/yrafe/intro-to-event-driven-architecture-1o17</link>
      <guid>https://dev.to/yrafe/intro-to-event-driven-architecture-1o17</guid>
      <description>&lt;p&gt;Event-driven architecture has gained significant popularity in recent years. Advancements in technologies like message brokers and distributed systems have enabled architects to design software in an event-driven manner. But what exactly is event-driven architecture, and why is it so impactful? To understand its purpose and benefits, we first need to review the traditional request-response model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Traditional Request-Response Model
&lt;/h2&gt;

&lt;p&gt;Let’s take a basic example of a ticket reservation web application. Users log in to purchase tickets for an event they want to attend. The browser collects information from the user and sends a request to the server. The application then performs the following operations in sequence:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Checks if the requested number of tickets is available.&lt;/li&gt;
&lt;li&gt;Locks the tickets for the user, so no one else can request the same tickets simultaneously.&lt;/li&gt;
&lt;li&gt;Sends a request to a payment service and waits for a response confirming the payment.&lt;/li&gt;
&lt;li&gt;Changes the status of the tickets to "Acquired."&lt;/li&gt;
&lt;li&gt;Assigns the tickets to the user’s account.&lt;/li&gt;
&lt;li&gt;Finally, returns a response to the user with the ticket details and confirmation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In a typical distributed microservices environment, this flow might look like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw61a5rpoz2heyswlr2j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw61a5rpoz2heyswlr2j0.png" alt="Image description" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We assume that two microservices are running on our backend. The Tickets microservice accepts the user’s HTTP request, queries a Mongo database to check ticket availability, and updates the ticket status to "Pending". It then sends a payment order to the Payment microservice over HTTP. The Payment service issues a payment request to a third-party payment gateway provider. Once confirmed, it notifies the Tickets service, which finalizes the transaction by updating the database and assigning the tickets to the user. Finally, the Tickets service responds to the user with a confirmation.&lt;/p&gt;

&lt;p&gt;All these operations must be completed for the ticket to be successfully reserved.&lt;/p&gt;

&lt;p&gt;While this approach works well in theory and often in practice, a few challenges arise:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;User wait time&lt;/strong&gt;: The user must wait for all operations to complete before receiving a response.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thread blocking&lt;/strong&gt;: In a multithreaded environment, each thread is locked for several seconds or minutes while waiting for the processes to complete. This can cause issues during periods of high traffic, such as when tickets are first released, potentially leading to thread exhaustion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unpredictable latency&lt;/strong&gt;: The payment gateway, especially if it involves user authentication via OTP, can introduce unpredictable delays.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For small-scale applications, these issues may not be critical, and adopting an alternative approach could seem unnecessary. However, as our application scales, latency and performance bottlenecks can emerge.&lt;/p&gt;

&lt;p&gt;We could reduce user wait time by responding to the user before sending the payment request.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdwge32kmaj89dutwb1a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdwge32kmaj89dutwb1a.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;While this approach solves one problem, the server still remains blocked until the entire process completes, meaning it doesn’t solve the thread-blocking issue. We are still processing the request synchronously.&lt;/p&gt;

&lt;p&gt;Another approach to mitigate this is horizontal scaling. Horizontally scaling microservices might alleviate some issues, but it introduces cost concerns. Additionally, it doesn’t fully address database congestion, especially when dealing with large datasets or additional operations like checking a user's age for an age-restricted event or updating loyalty points. Also, we can only scale our own microservices but we have no control over third party service providers.&lt;/p&gt;

&lt;p&gt;The flow becomes more complex if we want to add new operations. Suppose we wanted to add a recommendation service that suggests new events based on the user's previous selections. Now, we need to send another request to a Recommendation microservice. Each new operation adds complexity and tight coupling, leading to a highly interconnected application.&lt;/p&gt;

&lt;p&gt;This is where event-driven architecture becomes a game-changer.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an Event
&lt;/h2&gt;

&lt;p&gt;An &lt;strong&gt;event&lt;/strong&gt; represents a fact, an action, or a state change. Imagine you're a manager at a financial company. One morning, you ask an employee, "I need a report on the quarterly income for Company X". The employee replies, "It will be ready in two hours". You go to your desk and proceed with your daily tasks, eventually forgetting about the report until you receive an email from the employee two hours later with the requested report. This scenario can be considered event-driven; you didn’t wait idly for the task to complete.&lt;/p&gt;

&lt;p&gt;Asking the employee for the report can be imagined as triggering an event.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;event&lt;/strong&gt; is always immutable. Events cannot be changed once they are sent. Events can also be stored indefinitely, and can be consumed by multiple consumers concurrently.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a Message Broker
&lt;/h2&gt;

&lt;p&gt;A message broker is a piece of intermediary software that sits between applications and services, allowing them to exchange messages and communicate seamlessly. Unlike direct request-response protocols such as HTTP, a message broker formats the message and stores it in a message queue. This queue acts as a buffer, holding messages until they are consumed by the receiving application. The key components of an event-driven model are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message Producers&lt;/strong&gt; produce the event (send the message to the message queue).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message Queues&lt;/strong&gt; store the messages until they are consumed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message Consumers&lt;/strong&gt; retrieve and process messages from the message queue. Multiple consumers can read the message concurrently from the queue.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Message Brokers&lt;/strong&gt; facilitate communication between producers and consumers, adding features like message routing, filtering, delivery acknowledgment, and transformation.&lt;/p&gt;
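
&lt;p&gt;The four roles can be sketched with Python's standard-library queue acting as the message queue; the event names are made up for illustration:&lt;/p&gt;

```python
import queue

message_queue = queue.Queue()  # the buffer between producers and consumers

def produce(event_type, payload):
    """Producer: hands the message to the queue and moves on."""
    message_queue.put({"type": event_type, "payload": payload})

def consume_all():
    """Consumer: retrieves and processes messages from the queue."""
    processed = []
    while not message_queue.empty():
        message = message_queue.get()
        processed.append(message["type"])
        message_queue.task_done()
    return processed
```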

&lt;h2&gt;
  
  
  Applying Event-Driven Architecture to Our Example
&lt;/h2&gt;

&lt;p&gt;Now, let’s revisit our ticket reservation web application. If we could notify the user that we’re processing their request and promise to inform them once processing is complete via email or notification, we could solve the earlier issues and process the logic asynchronously.&lt;/p&gt;

&lt;p&gt;To reconstruct our ticket reservation flow as event-driven, let’s break down each step to understand how the new model operates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Ticket Reservation Request&lt;/strong&gt;&lt;br&gt;
The user sends the request to an API gateway, and the request is processed by the Tickets service. The service queries the database to check the tickets' availability and changes the ticket status to "Pending." It then sends a message to the message queue; the message can be formatted as JSON or any other format supported by the message broker. Finally, the server responds to the user that the reservation is "Accepted" and being processed.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mjuycga6rr3vyzszxfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1mjuycga6rr3vyzszxfi.png" alt="Image description" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above diagram may seem unusual since nothing yet comes out of the message queue. However, this illustrates a core concept in event-driven architecture: when a producer sends an event, delivery is only guaranteed within the context of the message queue, which means that consumers are decoupled from producers. The broker acknowledges only that the message has been stored, and the producer doesn’t need to know which service will consume it or how it will be processed. This decoupling enables flexible, scalable systems, as we’ll see later on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Payment Processing&lt;/strong&gt;&lt;br&gt;
The Payment service consumes the message from the queue, initiates a payment request to the third-party service, and waits for confirmation. Once received, it produces an event indicating successful payment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin2w6bvwl72b8yaf5hv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin2w6bvwl72b8yaf5hv6.png" alt="Image description" width="652" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Ticket Confirmation&lt;/strong&gt;&lt;br&gt;
The Tickets service consumes the event (Reads the message from the queue), changes the status of the tickets to "Acquired," and completes the reservation process by associating the tickets with the user’s account. Here, the Tickets service acts as a consumer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: User Notification&lt;/strong&gt;&lt;br&gt;
Although the ticket reservation is now finalized, the user still needs to be informed. The final step in the process could involve producing a notification event, which can be sent to an email service or another system to alert the user.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrxv2ytfkoykbqoexm2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrxv2ytfkoykbqoexm2y.png" alt="Image description" width="652" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Request-Response&lt;/strong&gt; vs &lt;strong&gt;Event-Driven&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that we have reviewed each approach, let us compare them and see what we gained.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Synchronous vs Asynchronous&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the request-response model, processing happens synchronously: the user has to wait for the response, and a server that never replies usually indicates a problem.&lt;/p&gt;

&lt;p&gt;In the event-driven model, the producer doesn't need to wait for a response or even know how, and by whom, the event will be consumed. Delivering the message to the consumer is outside of its responsibility and control. This allows the producer to move on to the next task instead of waiting for a response it doesn't actually need.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Inversion of Control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the request-response model, the sender first needs to know about the receiver. If the sender needs to communicate with multiple receivers, it must send a separate request to each one, each with its own unique parameters. This makes the sender depend on the receivers of the request.&lt;/p&gt;

&lt;p&gt;In an event-driven model, the producer is not even aware of the consumers of the event. This completely decouples the sender from the receivers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Loose Coupling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The reason event-driven architecture and microservices work so well together is that event-driven architecture dramatically changes the dynamics of dependencies. As our application logic grows more complex, we can keep adding consumers that consume the same event with the same structure. The producer service remains completely decoupled from its consumers.&lt;/p&gt;
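&lt;p&gt;A minimal pub/sub sketch makes this decoupling concrete. The topic name, event shape, and handlers are hypothetical; the point is that new consumers are added without touching the publisher:&lt;/p&gt;

```python
# Minimal in-process pub/sub: the producer publishes to a topic name and never
# learns who is subscribed to it.
subscribers = {}

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, event):
    for handler in subscribers.get(topic, []):
        handler(event)

log = []
subscribe("payment.succeeded", lambda e: log.append(("tickets", e["id"])))
subscribe("payment.succeeded", lambda e: log.append(("email", e["id"])))
# Adding an analytics consumer later requires no change to the producer:
subscribe("payment.succeeded", lambda e: log.append(("analytics", e["id"])))

publish("payment.succeeded", {"id": 42})
print(log)  # all three handlers ran from the same published event
```

&lt;p&gt;A real broker replaces the &lt;code&gt;subscribers&lt;/code&gt; dictionary with durable topics and network delivery, but the dependency direction is the same: consumers depend on the event's structure, never on the producer itself.&lt;/p&gt;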

&lt;p&gt;Event-driven architecture offers numerous benefits and changes the way we view traditional applications. However, improper implementation can lead to significant issues, so before switching to such a model, it’s important to assess its suitability for your application. In upcoming articles, we’ll dive into popular event-driven architectural patterns and when to apply them, starting with the first important design pattern, the SAGA pattern, in the next article.&lt;/p&gt;

</description>
      <category>eventdriven</category>
      <category>software</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
