Roger Viñas Alcon for Adevinta Spain


Spring Cloud Sleuth in action

Spring Cloud Sleuth is the solution for distributed tracing provided by Spring, and it comes with a bunch of useful integrations out of the box.

GitHub: rogervinas/spring-cloud-sleuth-in-action (🍀 Spring Cloud Sleuth in Action)

This sample ☝️ uses some of these integrations, executing the following flow:


To keep it simple, everything is executed within the same Spring Boot application, but in the end it is the same as if it were split between different services.

Demo time!

Let's follow these steps to execute the demo:

  • Start the Spring Boot Application:

    SPRING_PROFILES_ACTIVE=docker-compose ./gradlew bootRun
  • Consume from the Kafka topic my.topic with kafkacat:

    kafkacat -b localhost:9094 -C -t my.topic -f '%h %s\n'
  • Execute a request to the first endpoint with curl or any other tool you like:

    The default format for context propagation is B3, so we use the headers X-B3-TraceId and X-B3-SpanId:

    curl http://localhost:8080/request1?payload=hello \
      -H 'X-B3-TraceId: aaaaaa1234567890' \
      -H 'X-B3-SpanId: bbbbbb1234567890'
  • Check application output:

    All lines should share the same traceId

    Started MyApplicationKt in 44.739 seconds (JVM running for 49.324) - traceId ? spanId ? - main
    >>> RestRequest1 hello  - traceId aaaaaa1234567890 spanId cf596e6281432fb9 - http-nio-8080-exec-7
    >>> KafkaProducer hello - traceId aaaaaa1234567890 spanId cf596e6281432fb9 - http-nio-8080-exec-7
    >>> KafkaConsumer hello - traceId aaaaaa1234567890 spanId 91e1b6b37334620c - KafkaConsumerDestination...
    >>> RestRequest2 hello  - traceId aaaaaa1234567890 spanId a1ac0233664f5249 - http-nio-8080-exec-8
    >>> RestRequest3 hello  - traceId aaaaaa1234567890 spanId bf384c3b4d97efe9 - http-nio-8080-exec-9
    >>> RestRequest4 hello  - traceId aaaaaa1234567890 spanId c84470ce03e993f1 - http-nio-8080-exec-1
    >>> AsyncService hello  - traceId aaaaaa1234567890 spanId acccead477b4e1c8 - task-3
  • Check kafkacat output: it should print the headers and payload of the consumed message, with the tracing context propagated in the headers

  • Check zipkin at http://localhost:9411/zipkin/


Show me the code!

This demo was created using this spring initializr configuration

Just adding the Sleuth dependency enables tracing by default in all of the supported integrations, so, as you will see, almost no extra coding is needed (with only a few exceptions).
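For reference, the tracing setup boils down to the two standard Sleuth starters. This is a sketch in Gradle Kotlin DSL (the exact versions come from the Spring Cloud BOM; the repository's build file is the definitive source):

```kotlin
dependencies {
    // enables tracing by default in all supported integrations
    implementation("org.springframework.cloud:spring-cloud-starter-sleuth")
    // reports the sampled traces to zipkin
    implementation("org.springframework.cloud:spring-cloud-sleuth-zipkin")
}
```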


Logging

We need to add the traceId and spanId values to the application log. In production we would use the logstash-logback-encoder to generate logs in JSON format and send them to an ELK stack, but for the demo we use this plain-text logback layout:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%msg - traceId %X{traceId:-?} spanId %X{spanId:-?} - %thread%n</pattern>
        </encoder>
    </appender>
    <root level="ERROR">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>


RestController

Create your @RestController as usual:

@RestController
class MyRestController {

    val logger = LoggerFactory.getLogger(MyRestController::class.java)

    @GetMapping("/request1")
    fun request1(@RequestParam("payload") payload: String): String {
        logger.info(">>> RestRequest1 $payload")
        return "ok"
    }
}

Kafka Producer & Consumer

We have a few alternatives to propagate tracing information when publishing to Kafka.

For example, we can use Spring for Apache Kafka and create a KafkaProducer or KafkaConsumer using the autoconfigured KafkaProducerFactory or KafkaConsumerFactory, or we can use the autoconfigured KafkaTemplate.

In this demo we use Spring Cloud Stream and its reactive functions support:

  • Configure the binder, bindings and function definitions:

    spring:
      cloud:
        stream:
          kafka:
            binder:
              brokers: "localhost:9094"
          bindings:
            consumer-in-0:
              group: ${}
              destination: "my.topic"
            producer-out-0:
              destination: "my.topic"
        function:
          definition: consumer;producer
  • The consumer is just a @Bean implementing Consumer<Message<PAYLOAD>>:

    class MyKafkaConsumer : Consumer<Message<String>> {
        val logger = LoggerFactory.getLogger(MyKafkaConsumer::class.java)
        override fun accept(message: Message<String>) {
            logger.info(">>> KafkaConsumer ${message.payload}")
        }
    }
  • The producer is just a @Bean implementing Supplier<Flux<Message<PAYLOAD>>>:

    In this case we have to use the MessagingSleuthOperators helper methods in order to preserve the tracing context when using reactive stream functions:

    class MyKafkaProducer(private val beanFactory: BeanFactory) : Supplier<Flux<Message<String>>> {

        val logger = LoggerFactory.getLogger(MyKafkaProducer::class.java)
        val sink = Sinks.many().unicast().onBackpressureBuffer<Message<String>>()

        fun produce(payload: String) {
            logger.info(">>> KafkaProducer $payload")
            sink.emitNext(createMessageWithTracing(payload), FAIL_FAST)
        }

        private fun createMessageWithTracing(payload: String): Message<String> {
            return MessagingSleuthOperators.handleOutputMessage(
                beanFactory,
                MessagingSleuthOperators.forInputMessage(beanFactory, GenericMessage(payload))
            )
        }

        override fun get() = sink.asFlux()
    }


RestTemplate

Just create a RestTemplate @Bean and inject it wherever it is needed:

@Configuration
class MyConfiguration {
    @Bean
    fun restTemplate() = RestTemplate()
}


Feign

Enable Feign clients and declare the @FeignClient as usual:

@SpringBootApplication
@EnableFeignClients
class MyApplication

@FeignClient(name = "request3", url = "http://localhost:\${server.port}")
interface MyFeignClient {
    @RequestMapping(method = [RequestMethod.GET], path = ["/request3"])
    fun request3(@RequestParam("payload") payload: String): String
}


WebClient

Just create a WebClient @Bean and inject it wherever it is needed:

@Configuration
class MyConfiguration {
    @Bean
    fun webClient() = WebClient.create()
}


Async

Just annotate the method with @Async as usual; the tracing context will be preserved between threads:

@SpringBootApplication
@EnableAsync
class MyApplication

@Service
class MyAsyncService {

    val logger = LoggerFactory.getLogger(MyAsyncService::class.java)

    @Async
    fun execute(payload: String): CompletableFuture<String> {
        logger.info(">>> AsyncService $payload")
        return CompletableFuture.completedFuture("ok")
    }
}


Zipkin

In production we would send only a small percentage of all the traces to zipkin (sampling), but for the demo we will send all of them:

spring:
  sleuth:
    sampler:
      probability: 1.0
  zipkin:
    base-url: "http://localhost:9411"

Docker compose

We use docker-compose with 3 containers: kafka, zookeeper and zipkin.
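As a sketch, such a docker-compose.yml could look like the following. The image choices and the Kafka listener setup are assumptions (only the 9094 and 9411 ports come from the demo itself), so check the repository for the definitive file:

```yaml
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka
    depends_on: [zookeeper]
    ports:
      - "9094:9094"  # external listener used by the application and kafkacat
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:9092,EXTERNAL://:9094
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://localhost:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  zipkin:
    image: openzipkin/zipkin
    ports:
      - "9411:9411"  # zipkin UI and API
```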

You can either:

  • Execute docker-compose up -d separately and then run the application executing ./gradlew bootRun
  • Run the application with SPRING_PROFILES_ACTIVE=docker-compose ./gradlew bootRun, as in the demo steps above, letting it manage the docker-compose containers for you



Testing

One easy way to test the demo is running a @SpringBootTest with an OutputCaptureExtension and verifying that all log lines contain the expected traceId and spanId values:

@SpringBootTest(webEnvironment = DEFINED_PORT)
@ExtendWith(OutputCaptureExtension::class)
class MyApplicationShould {

    @Test
    fun `propagate tracing`(log: CapturedOutput) {
        val traceId = "edb77ece416b3196"
        val spanId = "c58ac2aa66d238b9"

        // request1, parseLogLines and assertThatLogLineContainsMessageAndTraceId
        // are test helpers (see the complete code in the repository)
        val response = request1(traceId, spanId)

        val logLines = await()
                .until({ parseLogLines(log) }, { it.size >= 7 })

        assertThatLogLineContainsMessageAndTraceId(logLines[0], "RestRequest1 hello", traceId)
        assertThatLogLineContainsMessageAndTraceId(logLines[1], "KafkaProducer hello", traceId)
        assertThatLogLineContainsMessageAndTraceId(logLines[2], "KafkaConsumer hello", traceId)
        assertThatLogLineContainsMessageAndTraceId(logLines[3], "RestRequest2 hello", traceId)
        assertThatLogLineContainsMessageAndTraceId(logLines[4], "RestRequest3 hello", traceId)
        assertThatLogLineContainsMessageAndTraceId(logLines[5], "RestRequest4 hello", traceId)
        assertThatLogLineContainsMessageAndTraceId(logLines[6], "AsyncService hello", traceId)
    }
}

That's it! Happy coding!
