For developers building event-driven architectures with Camunda 8, the built-in Kafka Consumer Connector seems like a perfect, low-code solution. However, this convenience can come at a steep price in production: system instability. The connector can quickly lead to errors that bring your entire engine down.
This critical failure happens by design. The connector activates a persistent, long-running Kafka listener for every single process definition that is deployed. When you follow the best practice of using unique consumer groups, you inadvertently launch a denial-of-service attack on your own Kafka broker, exhausting its connection limits and, in turn, overwhelming the Zeebe engine.
The solution is to decouple event consumption from process execution. This article provides a step-by-step guide to implementing the most robust and scalable alternative: the External Kafka Client (often referred to as a "Job Worker" or "External Task Worker"). This pattern shifts responsibility, centralizes control, and makes your architecture resilient.
The Architecture: From Embedded Connector to External Client
The Job Worker pattern fundamentally changes the interaction model. Instead of Camunda pulling messages from Kafka via an embedded connector, a standalone service listens to Kafka and pushes relevant events to Camunda.
Here’s the data flow:
- Event Published: An external system publishes a message to a Kafka topic.
- Centralized Consumption: Your custom-built "External Kafka Client" service—a single, dedicated microservice—is the only component consuming from that topic.
- Intelligent Forwarding: After consuming and parsing the message, the service uses the Camunda 8 Zeebe Client to interact with the process engine. It can either:
  - Start a new process instance by sending a "start" command.
  - Correlate the message to a specific, waiting process instance.
- Process Reacts: The Zeebe engine receives the command and either creates a new instance or moves an existing one forward.
This architecture immediately resolves the resource exhaustion problem by reducing potentially hundreds of Kafka connections to just one, managed entirely within your external client.
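To make these two options concrete, here is a minimal sketch of how a service could issue either command with the Zeebe Java client (the EventForwarder class name and the loan-application process ID are illustrative, not part of the sample project):
// Hypothetical forwarding service showing both Zeebe commands
import io.camunda.zeebe.client.ZeebeClient;
import java.util.Map;
public class EventForwarder {
    private final ZeebeClient zeebeClient;
    public EventForwarder(ZeebeClient zeebeClient) {
        this.zeebeClient = zeebeClient;
    }
    // Option 1: start a new process instance for the consumed event
    public void startProcess(Map<String, Object> variables) {
        zeebeClient.newCreateInstanceCommand()
                .bpmnProcessId("loan-application") // illustrative process ID
                .latestVersion()
                .variables(variables)
                .send()
                .join();
    }
    // Option 2: correlate the event to a waiting process instance
    public void correlateEvent(String correlationKey, Map<String, Object> variables) {
        zeebeClient.newPublishMessageCommand()
                .messageName("CreditCheckCompleted")
                .correlationKey(correlationKey)
                .variables(variables)
                .send()
                .join();
    }
}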
Sample Process: A Loan Application System
Let's model a loan application process that must wait for an external credit check to be completed.
- Start Event: The application is submitted.
- Service Task "Log Application": Initial data is processed.
- Intermediate Message Catch Event "Wait for Credit Score": The process instance pauses here, waiting for a message named CreditCheckCompleted. This is a standard BPMN message event, not a Kafka connector.
- Service Task "Finalize Application": This task runs only after the credit score has been received.
- End Event: The process completes.
The crucial element is the "Wait for Credit Score" event. It is configured with a Message Name (CreditCheckCompleted) and a Correlation Key (e.g., a unique applicationId) to ensure the incoming event is delivered to the correct process instance.
Step-by-Step Implementation Guide
Step 1: Design and Deploy the BPMN Process
In the Camunda Web Modeler, create the loan application process. Configure the "Wait for Credit Score" message event:
- Message Name: Set it to a static value, for example, CreditCheckCompleted.
- Correlation Key: Define a FEEL expression that points to a unique variable in your process, such as =applicationId.
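Under the hood, the Modeler persists this configuration as a message subscription in the BPMN XML. A minimal sketch of the relevant element (the message ID is illustrative):
<bpmn:message id="Message_CreditCheckCompleted" name="CreditCheckCompleted">
  <bpmn:extensionElements>
    <zeebe:subscription correlationKey="=applicationId" />
  </bpmn:extensionElements>
</bpmn:message>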
Deploy this model to your Camunda cluster. Zeebe will now have an open subscription for a message named CreditCheckCompleted for each running instance of this process.
Step 2: Build the External Kafka Client (Job Worker)
This is a standalone microservice. A Spring Boot application is an excellent choice due to its powerful integrations with Kafka and easy setup.
1. Add Dependencies
You'll need the spring-kafka dependency for the consumer and the Camunda zeebe-client-java dependency for communication with the engine.
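In a Maven project, that would look roughly like this (the zeebe-client-java version is a placeholder; match it to your Camunda release, and let the Spring Boot BOM manage spring-kafka):
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.camunda</groupId>
    <artifactId>zeebe-client-java</artifactId>
    <version>8.5.0</version> <!-- placeholder: use your Camunda version -->
</dependency>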
2. Configure the Zeebe Client
Create a bean to provide a configured ZeebeClient instance that connects to your Camunda 8 cluster.
// In your Spring Boot application's configuration class
import io.camunda.zeebe.client.ZeebeClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ZeebeClientConfiguration {

    // Camunda SaaS credentials, read from application properties
    @Value("${camunda.client.cloud.cluster-id}")
    private String clusterId;

    @Value("${camunda.client.cloud.client-id}")
    private String clientId;

    @Value("${camunda.client.cloud.client-secret}")
    private String clientSecret;

    @Bean
    public ZeebeClient zeebeClient() {
        // The cloud builder handles OAuth and TLS for a Camunda 8 SaaS cluster
        return ZeebeClient.newCloudClientBuilder()
                .withClusterId(clusterId)
                .withClientId(clientId)
                .withClientSecret(clientSecret)
                .build();
    }
}
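The three property keys referenced by the @Value annotations are supplied in application.properties (the values below are placeholders for your cluster's credentials):
camunda.client.cloud.cluster-id=your-cluster-id
camunda.client.cloud.client-id=your-client-id
camunda.client.cloud.client-secret=your-client-secret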
3. Implement the Kafka Listener
This is the core of your service. The @KafkaListener annotation makes it incredibly simple to create a consumer.
// In a new service class
import com.fasterxml.jackson.databind.ObjectMapper;
import io.camunda.zeebe.client.ZeebeClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class CreditCheckEventConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(CreditCheckEventConsumer.class);

    private final ZeebeClient zeebeClient;
    private final ObjectMapper objectMapper;

    public CreditCheckEventConsumer(ZeebeClient zeebeClient, ObjectMapper objectMapper) {
        this.zeebeClient = zeebeClient;
        this.objectMapper = objectMapper;
    }

    // This method is your Kafka consumer
    @KafkaListener(topics = "credit-check-results", groupId = "central-loan-processor")
    public void consumeCreditCheckEvent(String message) {
        try {
            // 1. Parse the incoming Kafka message
            CreditCheckResult event = objectMapper.readValue(message, CreditCheckResult.class);
            LOGGER.info("Consumed credit check result for application: {}", event.getApplicationId());

            // 2. Publish the correlation message to Camunda
            zeebeClient.newPublishMessageCommand()
                    .messageName("CreditCheckCompleted") // Must match the BPMN message name
                    .correlationKey(event.getApplicationId()) // Must match the BPMN correlation key
                    .variables(event.getPayload()) // Pass the payload as process variables
                    .send()
                    .join(); // Use .join() for a synchronous send, or handle the CompletableFuture for async

            LOGGER.info("Successfully correlated message for application: {}", event.getApplicationId());
        } catch (Exception e) {
            LOGGER.error("Failed to process and correlate Kafka message", e);
            // Implement your dead-letter queue or retry logic here
        }
    }
}
// Helper DTO for parsing the message
// (implementation details for CreditCheckResult and its payload would be in separate files)
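For completeness, here is a minimal sketch of such a DTO; only getApplicationId() and getPayload() are required by the listener above, and the payload field names are assumptions:
import java.util.Map;
public class CreditCheckResult {
    private String applicationId;
    private Map<String, Object> payload; // e.g., creditScore, riskRating
    public String getApplicationId() { return applicationId; }
    public void setApplicationId(String applicationId) { this.applicationId = applicationId; }
    public Map<String, Object> getPayload() { return payload; }
    public void setPayload(Map<String, Object> payload) { this.payload = payload; }
}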
This code sets up a single, centrally managed consumer. When a message arrives on the credit-check-results topic, the service parses it and uses the zeebeClient to publish and correlate it to Camunda.
Step 3: Test the End-to-End Flow
- Start an instance of your "Loan Application" process in Camunda, providing an applicationId. The instance will run and then wait at the "Wait for Credit Score" step.
- Publish a message to your credit-check-results Kafka topic. The message's payload should be a JSON string that contains the same applicationId and any other data you want to pass as variables.
- Your External Kafka Client will consume the message, extract the applicationId, and send the correlation command to Camunda.
- Observe as the waiting process instance immediately moves forward to the "Finalize Application" step.
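For a quick manual test, you can publish a sample event with the console producer that ships with Kafka (the broker address and payload fields are illustrative; on some installations the script is named kafka-console-producer.sh):
kafka-console-producer --bootstrap-server localhost:9092 --topic credit-check-results
> {"applicationId": "app-12345", "payload": {"creditScore": 742}}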
Why the Job Worker Pattern is the Production-Ready Solution
- Complete Resource Control: You eliminate the risk of resource exhaustion by managing the Kafka consumer's lifecycle and connection pool yourself.
- Decoupling and Independent Scalability: Your consumer service is not tied to the process engine. If Kafka message volume increases, you can scale the consumer service horizontally without touching your Camunda deployment.
- Superior Error Handling: You can implement robust error handling, dead-letter queues, and backpressure policies directly in your service—something far more difficult to achieve inside a BPMN model.
- Cleaner Process Models: Your BPMN diagrams are no longer cluttered with technical infrastructure details. They become pure, easy-to-read representations of your business process.