
Maksim Matlakhov

Originally published at blog.vibetdd.dev

Introducing a Hybrid Event Sourcing Framework for Modern Applications

Event sourcing has gained significant traction in recent years, promising complete audit trails, temporal queries, and robust system architecture. However, pure event sourcing often introduces complexity that can overwhelm development teams. Today, I want to introduce a hybrid event sourcing approach I've integrated into my framework that captures the benefits of event sourcing while maintaining operational simplicity.

The framework is designed to be compatible with the Event Modeling methodology, strictly following the command/event/read model pattern with clear boundaries between these building blocks. However, it's not limited to Event Modeling - we use a similar approach in my current company, and it works very well in practice.

The Challenge with Pure Event Sourcing

Based on my experience implementing various event-driven systems, pure event sourcing comes with real-world challenges:

Race Conditions and Conflict Resolution

One of the challenges I've faced is managing race conditions in update operations. Consider this scenario: two users update a product status simultaneously. In pure event sourcing, this typically requires:

  1. Optimistic locking mechanisms - Using unique constraints, something like {aggregate_id, sequence_id}
  2. Conflict resolution logic - Determining which update wins and handling the "loser"
  3. Retry mechanisms - Failed operations must retry based on the latest system state
  4. User notification - Informing users about conflicts and requiring manual resolution

While these solutions work, they add significant complexity to the system. Event sourcing advocates often suggest that optimistic locking makes this "much simpler," but in my experience, implementing robust conflict resolution that handles all edge cases gracefully requires substantial engineering effort, or relying on the magic of an event sourcing framework.
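To make that concrete, here's a minimal sketch of the moving parts a pure event-sourced update typically needs. The EventStore interface and ConcurrencyException below are hypothetical stand-ins, not taken from any specific framework:

import java.util.UUID

class ConcurrencyException(message: String) : RuntimeException(message)

interface EventStore {
    fun load(aggregateId: UUID): List<Any>                            // full event history
    fun append(aggregateId: UUID, expectedSequence: Long, event: Any) // unique {aggregate_id, sequence_id}
}

fun changeStatus(store: EventStore, productId: UUID, newStatus: String, maxRetries: Int = 3) {
    repeat(maxRetries) {
        val history = store.load(productId)          // replay to rebuild the current state
        val expectedSequence = history.size.toLong() // optimistic lock: the sequence we expect to write
        try {
            store.append(productId, expectedSequence, "StatusChanged:$newStatus")
            return                                   // success
        } catch (e: ConcurrencyException) {
            // another writer appended first: reload, re-validate, and try again
        }
    }
    error("Could not apply status change after $maxRetries attempts; surface the conflict to the user")
}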

Read Model Performance and Complexity

Building read models from events presents several practical challenges:

  • Event replay overhead - With a large event stream, rebuilding projections can be time-consuming
  • Multiple projection maintenance - Each new query pattern requires a new projection
  • Event schema evolution - Changing event structures requires migration of all dependent projections
  • Eventual consistency - Read models are eventually consistent by nature, which can create user experience issues when immediate read-after-write consistency is expected
  • Debugging complexity - Troubleshooting issues requires understanding the entire event history

While the flexibility of having multiple projections (user_history, user_orders, disabled_users) is powerful, it comes with operational overhead that many teams underestimate.

The Root Issue

I want to clarify that these aren't inherent flaws in event sourcing - they're general distributed system challenges that pure event sourcing doesn't solve automatically. The theoretical benefits are compelling, but the practical implementation complexity often outweighs the advantages for many use cases.

My approach addresses these challenges by combining events with current state storage, making conflict resolution simpler and basic read operations more straightforward while preserving the audit trail and integration benefits that make event sourcing attractive.

My Hybrid Approach

Instead of pure event sourcing, I've developed a hybrid system that combines the audit trail benefits of events with the simplicity of current state storage. Here's how it works:

Core Principles

  1. Generate events for every action: User created, status changed, payout requested, order canceled
  2. Events belong to specific models: Similar to aggregates in event sourcing terms
  3. Transactional consistency: Events and model updates happen in a single transaction
  4. Independent model parts: Different aspects of a model can be updated independently with their own versioning

Model Structure

My models can be simple or have multiple parts that update independently:

// Simple model - single entity
data class Comment(
    val content: String,
    val author: String
)

// Complex model - multiple independent parts
data class Product(
    val description: ProductDescription,  // Can be updated independently
    val status: Status,                   // Can be updated independently  
    val price: ProductPrice               // Can be updated independently
)

This design eliminates false conflicts. An admin updating a product's description won't conflict with another admin updating its price, as they operate on different parts with separate versioning.
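One way to picture this is a wrapper that carries a version per part. Versioned<T> below is illustrative only and not necessarily how the framework represents versions internally; the part types are the ones from the Product example above:

// Illustrative only: per-part versioning, so updates to different parts never collide
data class Versioned<T>(val value: T, val version: Long)

data class ProductState(
    val description: Versioned<ProductDescription>, // one admin bumps only this version
    val status: Versioned<Status>,
    val price: Versioned<ProductPrice>              // another admin bumps only this one
)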

Event uniqueness is enforced through a composite identifier consisting of modelId + action + version. Here are examples of how events are stored in MongoDB:

User Created Event:

{
  "_id": "6b846115-e41c-35db-ab27-12f8b3e99591",
  "topic": "user.model.created.v1",
  "event": {
    "body": {
      "personalData": {
        "name": "John Smith",
        "email": "example@vibetdd.dev"
      },
      "status": {
        "name": "ACTIVE"
      }
    },
    "metadata": {
      "eventId": "454460ab-cfb5-3b7d-a9c6-e39f13f2dd23",
      "modelId": "a1a87c49-670e-3844-a2df-368c77f207a9",
      "version": 1,
      "createdAt": "2025-09-25T09:51:07.712Z"
    }
  }
}

Personal Data Updated Event:

{
  "_id": "cef0220d-cd94-3aa3-af33-b68a7f3d0db9",
  "topic": "user.personal-data.updated.v1",
  "event": {
    "body": {
      "previous": {
        "name": "John Smith",
        "email": "example@vibetdd.dev"
      },
      "current": {
        "name": "Will Smith", 
        "email": "example@vibetdd.dev"
      }
    },
    "metadata": {
      "eventId": "3ae2cd7d-cba0-37cd-a6d4-3f0145571d4c",
      "modelId": "a1a87c49-670e-3844-a2df-368c77f207a9",
      "version": 1,
      "createdAt": "2025-09-25T09:52:03.046Z"
    }
  }
}
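To illustrate how that modelId + action + version uniqueness could be enforced at the storage level, here's a hedged sketch using the MongoDB Java driver. The collection and field names mirror the documents above (the topic already encodes the action), but the framework's actual internals may differ:

import com.mongodb.client.MongoClients
import com.mongodb.client.model.IndexOptions
import com.mongodb.client.model.Indexes

fun main() {
    val events = MongoClients.create("mongodb://localhost:27017")
        .getDatabase("users")
        .getCollection("events")

    // A second insert with the same {topic, modelId, version} fails, which is what
    // turns a concurrent update of the same part into a detectable conflict.
    events.createIndex(
        Indexes.compoundIndex(
            Indexes.ascending("topic"),
            Indexes.ascending("event.metadata.modelId"),
            Indexes.ascending("event.metadata.version")
        ),
        IndexOptions().unique(true)
    )
}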

Event Storage Architecture

The event storage follows a clean separation:

  • Database per domain/service: Each service maintains its own events
  • Common event collection: All event types stored in a single table/collection
  • Current state storage: Separate storage for model current state
  • Message broker integration: Events processed asynchronously for consumers
  • Multiple event versions: Enables seamless migration between event DTO versions (this will be covered in a separate post)

A background processor constantly polls for pending events, determines which message brokers to send them to, handles errors, and manages retries.
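As a rough sketch of that loop (PendingEventStore and BrokerPublisher are hypothetical stand-ins for the framework's own ports, not its real API):

import kotlinx.coroutines.delay

data class StoredEvent(val id: String, val topic: String, val payload: String)

interface PendingEventStore {
    suspend fun fetchPending(limit: Int): List<StoredEvent>
    suspend fun markPublished(eventId: String)
    suspend fun markFailed(eventId: String, error: String)
}

interface BrokerPublisher {
    suspend fun publish(topic: String, payload: String)
}

class EventDispatcher(
    private val store: PendingEventStore,
    private val publisher: BrokerPublisher,
    private val pollIntervalMs: Long = 500
) {
    suspend fun run() {
        while (true) {
            val pending = store.fetchPending(limit = 100)
            for (event in pending) {
                try {
                    publisher.publish(event.topic, event.payload)      // route by topic
                    store.markPublished(event.id)
                } catch (e: Exception) {
                    store.markFailed(event.id, e.message ?: "unknown") // picked up again on a later poll
                }
            }
            if (pending.isEmpty()) delay(pollIntervalMs)               // back off when there is nothing to do
        }
    }
}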

Example: Users and Payouts Services

Let's see how this works in practice with two services: users and payouts. The users service handles CRUD operations and sends events for every operation, while the payouts service consumes user events to make business decisions.

Users Service

The users service demonstrates the complete event creation flow. For updates, there are two cases: personal data updates with version control and status updates without it.

Personal data updates require version control - the client must request the current version before updating and send it back. If the versions don't match, a conflict exception is thrown. Status updates, however, don't require version control, as they're considered independent operations.
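From the client's point of view, the version-controlled flow could look roughly like this; UsersClient, UserView, and ConflictException are hypothetical names used only to illustrate the sequence:

import java.util.UUID

data class UserView(val id: UUID, val version: Long, val name: String)

class ConflictException(message: String) : RuntimeException(message)

interface UsersClient {
    suspend fun get(id: UUID): UserView   // returns the current state along with its version
    suspend fun updatePersonalData(id: UUID, version: Long, name: String): UserView
}

suspend fun renameUser(client: UsersClient, id: UUID, newName: String): UserView {
    val current = client.get(id)                                    // 1. read the current version
    return client.updatePersonalData(id, current.version, newName)  // 2. send it back with the update;
                                                                    //    a stale version raises ConflictException
}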

Event Creation

Every operation starts with creating an event. Here's how I create a new user:

class CreateUserUseCase(
    private val idProvider: IdProvider<UUID>,
    private val validator: CommandValidator,
    private val eventOrchestrator: EventOrchestrator<User>
) : SaveCommandUseCase<CreateUserCommand, User> {

    override suspend fun execute(command: CreateUserCommand): Model<User> {
        // Step 1: Validate the incoming command
        validator.validate(command)

        // Step 2: Create the event command with all necessary data
        val event = CreateEventCommand(
            modelId = idProvider.generate(command.data.email), // Generate deterministic ID from email
            actor = command.actor, // Track who performed the action
            body = UserCreated( // The actual event body with business data
                personalData = PersonalData(
                    email = command.data.email,
                    name = command.data.name,
                ),
                status = Status(UserStatus.ACTIVE) // Default status for new users
            )
        )

        // Step 3: Use event orchestrator to persist event and create model in single transaction
        return eventOrchestrator.create(event) {
            // This lambda defines how to build the model from the event
            User(
                personalData = it.personalData,
                status = it.status
            )
        }
    }
}

The business logic only needs to:

  1. Build the proper event
  2. Create mapping to the related model
  3. Let the system handle storage, processing, and notification
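In use, that might look like the following; the exact shape of CreateUserCommand, UserData, and Actor is assumed from the fields accessed above and may differ from the real definitions:

// Assumed command shape, inferred from command.actor / command.data.email / command.data.name above
suspend fun registerUser(useCase: CreateUserUseCase): Model<User> =
    useCase.execute(
        CreateUserCommand(
            actor = Actor("admin"),           // hypothetical actor representation
            data = UserData(
                email = "example@vibetdd.dev",
                name = "John Smith"
            )
        )
    )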

Event Topic Structure

Every event is mapped to a topic following the format: model.subject.action.dto-version

enum class UsersEventTopicV1(
    override val eventClass: KClass<out EventDtoBody>,
    override val action: EventAction,
    override val model: String = "user",
    override val version: Int = 1,
) : EventTopic {
    CREATED(UserCreatedV1::class, EventAction.created()),
    DELETED(UserDeletedV1::class, EventAction.deleted()),
    PERSONAL_DATA_UPDATED(PersonalDataUpdatedV1::class, EventAction.updated(UserModelSubject.PERSONAL_DATA)),
    STATUS_UPDATED(UserStatusUpdatedV1::class, EventAction.updated(UserModelSubject.STATUS))
}

object UserModelSubject {
    const val PERSONAL_DATA = "personal-data"
    const val STATUS = "status"
}

This generates topics like:

  • user.model.created.v1
  • user.personal-data.updated.v1
  • user.status.updated.v1

Event Mapping and Versioning

Events are mapped between internal models and external DTOs using dedicated mappers:

@Component
class UserCreatedMapper : EventMapper<UserCreated, UserCreatedV1>(
    modelClass = UserCreated::class,
    topic = UsersEventTopicV1.CREATED
) {

    override fun UserCreated.mapToDto() = UserCreatedV1(
        personalData = PersonalDataV1(
            name = personalData.name,
            email = personalData.email,
        ),
        status = StatusV1(
            name = status.name.name,
            message = status.message
        )
    )

    override fun UserCreatedV1.mapToModel() = UserCreated(
        personalData = PersonalData(
            name = personalData.name,
            email = personalData.email,
        ),
        status = Status(
            name = UserStatus.valueOf(status.name),
            message = status.message
        )
    )
}

This mapping layer enables:

  • Version migration: Multiple versions of the same event can coexist (see the sketch below)
  • Backward compatibility: Consumers can upgrade at their own pace
  • Clean boundaries: Internal models remain separate from external contracts
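For example, when a second DTO version is introduced, a parallel mapper can be registered for the same internal model. UserCreatedV2, PersonalDataV2, and UsersEventTopicV2 below are invented for illustration; the pattern simply mirrors the V1 mapper above:

@Component
class UserCreatedMapperV2 : EventMapper<UserCreated, UserCreatedV2>(
    modelClass = UserCreated::class,
    topic = UsersEventTopicV2.CREATED
) {

    override fun UserCreated.mapToDto() = UserCreatedV2(
        personalData = PersonalDataV2(
            name = personalData.name,
            email = personalData.email,
        ),
        statusName = status.name.name        // e.g. a hypothetical V2 DTO that flattens the status object
    )

    override fun UserCreatedV2.mapToModel() = UserCreated(
        personalData = PersonalData(
            name = personalData.name,
            email = personalData.email,
        ),
        status = Status(UserStatus.valueOf(statusName))
    )
}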

Handling Updates

The framework supports two versioning approaches:

With Version Control - For critical data that requires conflict detection:

class UpdateUserPersonalDataUseCase(
    private val validator: CommandValidator,
    private val modelStorage: UserStoragePort,
    private val eventOrchestrator: EventOrchestrator<User>
) : SaveCommandUseCase<UpdateUserPersonalDataCommand, User> {

    override suspend fun execute(command: UpdateUserPersonalDataCommand): Model<User> {
        // Step 1: Get the current stored model
        val storedModel: Model<User> = modelStorage.getRequired(command.target.id)
        val updated: PersonalData = buildUpdated(storedModel, command) ?: return storedModel

        // Step 2: Validate the command
        validator.validate(command)

        // Step 3: Create event with expected version for conflict detection
        val event = CreateEventCommand(
            modelId = command.target.id,
            expectedVersion = command.target.versionRequired("Update personal data"), // Version control
            actor = command.actor,
            body = PersonalDataUpdated(
                previous = storedModel.body.personalData,
                current = updated,
            )
        )

        // Step 4: Update model with event orchestrator
        return eventOrchestrator.update(event) { event, model ->
            model.copy(
                personalData = event.current
            )
        }
    }

    // If nothing changed, return null so the caller returns the stored model with the same version
    private fun buildUpdated(storedModel: Model<User>, command: UpdateUserPersonalDataCommand): PersonalData? {
        val updated = storedModel.body.personalData.copy(
            name = command.data.name,
        )
        return if (storedModel.body.personalData == updated) null else updated
    }
}

Without Version Control - For independent updates where conflicts are acceptable:

class UpdateUserStatusUseCase(
    private val modelStorage: UserStoragePort,
    private val eventOrchestrator: EventOrchestrator<User>
) : SaveCommandUseCase<UpdateUserStatusCommand, User> {

    override suspend fun execute(command: UpdateUserStatusCommand): Model<User> {
        // Step 1: Get the current stored model
        val storedModel: Model<User> = modelStorage.getRequired(command.target.id)
        val updated: Status<UserStatus> = buildUpdated(storedModel, command) ?: return storedModel

        // Step 2: Create event without expected version (no version control)
        val event = CreateEventCommand(
            modelId = command.target.id,
            actor = command.actor, // No expectedVersion parameter
            body = UserStatusUpdated(
                previous = storedModel.body.status,
                current = updated,
            )
        )

        // Step 3: Update model with event orchestrator
        return eventOrchestrator.update(event) { event, model ->
            model.copy(
                status = event.current
            )
        }
    }

    private fun buildUpdated(storedModel: Model<User>, command: UpdateUserStatusCommand): Status<UserStatus>? =
        if (storedModel.body.status.name == command.data.status) null
        else Status(command.data.status)
}

Payouts Service

The payouts service demonstrates event consumption. It includes the users service events client dependency and creates a consumer to handle relevant user events.

Add Dependency

Include the events client dependency in your service:

<dependency>
    <groupId>vt.demo.service</groupId>
    <artifactId>users-client-events</artifactId>
    <version>${client.users.version}</version>
</dependency>

Create Consumer

Create a consumer class and annotate methods for events you want to handle:

@EventConsumer
class UserEventsConsumerV1 {

    fun onCreated(event: EventV1<UserCreatedV1>) {
        // Handle me
    }

    fun onStatusUpdated(event: EventV1<UserStatusUpdatedV1>) {
        // Handle me
    }
}

That's it! The service automatically receives events regardless of the transport mechanism (Kafka, RabbitMQ, SQS).
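For instance, a consumer in the payouts domain could react to status changes like this; PayoutService and its suspendForUser method are hypothetical, and the event accessors are assumed from the stored document structure shown earlier:

@EventConsumer
class UserStatusPayoutPolicyV1(
    private val payouts: PayoutService                   // hypothetical payouts-domain port
) {

    fun onStatusUpdated(event: EventV1<UserStatusUpdatedV1>) {
        // Assumed accessors: body/metadata mirror the stored event documents shown earlier
        if (event.body.current.name != "ACTIVE") {
            payouts.suspendForUser(event.metadata.modelId)   // hypothetical business decision
        }
    }
}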

Conclusion

This hybrid event sourcing approach has proven highly effective in the production systems I've worked on. It provides:

  • Audit trail completeness without operational complexity
  • Event-driven integration without pure event sourcing overhead
  • Conflict resolution that matches business requirements
  • Developer productivity with familiar patterns

The key insight is that you don't need pure event sourcing to get most of its benefits. By combining current state storage with comprehensive event logging, I achieve the best of both worlds: operational simplicity and event-driven architecture benefits.

The framework handles the complexity of event processing, message broker integration, and version management, letting developers focus on business logic rather than infrastructure concerns.

For teams considering event sourcing, I highly recommend exploring hybrid approaches. You might find, as I did, that the benefits are compelling while the operational burden remains manageable.
