
Spring Kafka Streams playground with Kotlin - I

In this series I will cover the creation of a simple Spring Boot application which demonstrates how to use spring-kafka to build an application with the Kafka clients, including Kafka Admin, Consumer, Producer and Kafka Streams.

The sample application will use the domain of Trading to give it a meaningful context.

I will be posting multiple articles in the next few days with the complete setup.

What are we building

We will build an application that simulates, in a very simplified way, the processing of stock quotes and leverage prices and sends them to Kafka topics:

  • quotes -> stock-quotes-topic
  • leverage-prices -> leverage-prices-topic

We will then create a GlobalKTable with a materialized view of the leverage prices so we can join it with incoming quotes and expose the most up-to-date leverage for a stock, taking advantage of the local RocksDB state store that Kafka Streams creates for us automatically as the default materialized store.

I wrote a quick article about RocksDB in the past if you want more information about it.
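
To make that more concrete, here is a minimal sketch of how such a GlobalKTable could be declared with the Kafka Streams DSL. It is illustrative only: the serdes are simplified to String/Double instead of the Avro serdes we will configure later in the series, and the store name leverage-prices-store is an assumption made for this example.

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.common.utils.Bytes
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.kstream.Consumed
import org.apache.kafka.streams.kstream.GlobalKTable
import org.apache.kafka.streams.kstream.Materialized
import org.apache.kafka.streams.state.KeyValueStore

// Illustrative sketch only: serdes simplified to String/Double, names are assumptions.
fun buildLeverageTable(builder: StreamsBuilder): GlobalKTable<String, Double> =
    builder.globalTable(
        "leverage-prices-topic",
        Consumed.with(Serdes.String(), Serdes.Double()),
        // names the local RocksDB-backed store so we can query it later
        Materialized.`as`<String, Double, KeyValueStore<Bytes, ByteArray>>("leverage-prices-store")
    )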

With that information we will then create a Kafka Streams topology that combines the two, i.e. when a quote is received it will try to enrich it with an existing leverage and then will (see the sketch after this list):

  • Send Apple "APPL" quotes to their own topic -> appl-stocks-topic
  • Send Google "GOOGL" quotes to their own topic -> googl-stocks-topic
  • Send all other quotes to a different topic -> all-other-stocks-topic

  • Count total quotes per type.

  • Count total quotes per type per interval.
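
As a rough sketch of that routing and counting, assuming the stock symbol is the record key and the per-type counts are keyed by symbol, and again simplifying the value to a plain Double instead of the Avro record (the enrichment join with the leverage GlobalKTable is left out here). The count store names are assumptions for this example:

import java.time.Duration
import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.common.utils.Bytes
import org.apache.kafka.streams.StreamsBuilder
import org.apache.kafka.streams.kstream.Consumed
import org.apache.kafka.streams.kstream.Grouped
import org.apache.kafka.streams.kstream.Materialized
import org.apache.kafka.streams.kstream.Produced
import org.apache.kafka.streams.kstream.TimeWindows
import org.apache.kafka.streams.state.KeyValueStore
import org.apache.kafka.streams.state.WindowStore

// Illustrative sketch only, not the final topology from this series.
fun buildQuotesTopology(builder: StreamsBuilder) {
    val quotes = builder.stream("stock-quotes-topic", Consumed.with(Serdes.String(), Serdes.Double()))

    // route each symbol to its own topic
    quotes.filter { symbol, _ -> symbol == "APPL" }
        .to("appl-stocks-topic", Produced.with(Serdes.String(), Serdes.Double()))
    quotes.filter { symbol, _ -> symbol == "GOOGL" }
        .to("googl-stocks-topic", Produced.with(Serdes.String(), Serdes.Double()))
    quotes.filter { symbol, _ -> symbol != "APPL" && symbol != "GOOGL" }
        .to("all-other-stocks-topic", Produced.with(Serdes.String(), Serdes.Double()))

    // total quotes per symbol, kept in a queryable state store
    quotes.groupByKey(Grouped.with(Serdes.String(), Serdes.Double()))
        .count(Materialized.`as`<String, Long, KeyValueStore<Bytes, ByteArray>>("quotes-count-store"))

    // total quotes per symbol per 1-minute window
    quotes.groupByKey(Grouped.with(Serdes.String(), Serdes.Double()))
        .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
        .count(Materialized.`as`<String, Long, WindowStore<Bytes, ByteArray>>("quotes-count-windowed-store"))
}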

From there we will expose some endpoints that let us query data directly from the local storage created by Kafka Streams, for the leverage and for the totals per interval, using grouping, windowing, counting, state stores and Interactive Queries from Kafka Streams.
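
As a preview, querying one of those local stores through Kafka Streams Interactive Queries looks roughly like this. It is only a sketch, reusing the assumed quotes-count-store name from the snippet above; obtaining the KafkaStreams instance from Spring is covered later in the series.

import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.StoreQueryParameters
import org.apache.kafka.streams.state.QueryableStoreTypes
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore

// Illustrative sketch only: reads a count straight from the local state store.
fun quoteCountFor(streams: KafkaStreams, symbol: String): Long? {
    val store: ReadOnlyKeyValueStore<String, Long> = streams.store(
        StoreQueryParameters.fromNameAndType("quotes-count-store", QueryableStoreTypes.keyValueStore<String, Long>())
    )
    return store.get(symbol)
}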

This is a visual overview of what we're building:

Overview of project

In this first part we will create the Spring Boot application, add the files to run Kafka locally, add the dependencies we need to build the project, and create the initial schemas.

Spring Boot App

Prerequisites

You will need Git, Java, Maven, Docker and Docker Compose or Kubernetes installed to follow this series.

I also assume you are familiar with running Spring Boot applications and using docker-compose or Kubernetes to run the required local infra. If you are not, please follow these other posts, where you will find detailed instructions about each of those:

  1. Spring boot crash course - Shows how to create a spring-boot application using the Spring boot initializer site.

  2. One to run them all - Shows how to run Kafka infra with docker-compose and some tricks.

  3. Simplest Spring Kafka Consumer and Producer - Java - Shows how to develop a simple spring-kafka consumer and producer using Java.

  4. Simplest Spring Kafka Consumer and Producer - Kotlin - Shows how to develop a simple spring-kafka consumer and producer using Kotlin.

  5. Running Kafka on Kubernetes for local development - Shows how to use Kubernetes Kafka infra for local development.

Creating the application

Go to the Spring Boot Starter site and create a project with the following configurations:

Project: Maven Project
Language: Kotlin
Spring Boot: 2.6.3
Dependencies: Spring Web
Project Metadata:

  • Group: com.maia
  • Artifact: simple-spring-kafka-stream-kotlin
  • Name: simple-spring-kafka-stream-kotlin
  • Description: Spring Kafka Kotlin Playground
  • Package name: com.maia.simplespringkafkastreamkotlin
  • Packaging: Jar
  • Java: 11

Spring Starter initial app configuration image

If you prefer you can also clone the sample repo of this project using git clone git@github.com:mmaia/simple-spring-kafka-stream-kotlin.git and check out tag v1 (git checkout v1) to see the first step with only the initial app.

Once you have extracted the generated project (or cloned the repo and checked out tag v1), you can build and run it to validate:

mvn clean package -DskipTests

and then:

mvn spring-boot:run

You should see the app running:

Console showing the app initialized

Add local infra

Let's now add the setup to run Kafka locally. I will provide two setups so you can decide whether you want to use docker-compose or Kubernetes. I will not go into details on the implementation, as I have past posts covering both approaches, as mentioned before:

  1. Docker compose approach is explained in the post [One to run them all](https://dev.to/thegroo/one-to-run-them-all-1mg6), but in this project we will be using the official Confluent Kafka, schema-registry and zookeeper versions.

  2. Kubernetes approach is explained in the post Running Kafka on Kubernetes for local development

So make sure to add a docker-compose or Kubernetes setup of your preference before continuing to the next steps. You can also simply check out the v2 tag of the example repository, which contains both setups, to take a look if you prefer: git checkout v2.

Run local infra

Again, please check the aforementioned articles for further details and explanations if needed.

  1. Docker compose: from a terminal in the project root folder run: docker-compose up -d

OR

  1. Kubernetes: open a terminal in the kubernetes folder and run kind create cluster --config=kind-config.yml, then after Kind is started: kubectl apply -f kafka-k8s

Add the line kubernetes/tmp/* to your .gitignore file if you're using Kubernetes so git will ignore the mapped volume files for Kafka and ZooKeeper.

Add Kafka maven dependencies

Make the following changes to the pom.xml file in the root folder of the project.

Add the Confluent repository and plugin repository in order to download the Confluent dependencies and be able to auto-generate code from the Avro schemas (more on that later).

<repositories>
    <repository>
        <id>confluent</id>
        <url>https://packages.confluent.io/maven/</url>
    </repository>
</repositories>

<pluginRepositories>
    <pluginRepository>
        <id>confluent</id>
        <url>https://packages.confluent.io/maven/</url>
    </pluginRepository>
</pluginRepositories>

Add the versions of the libraries we will be using inside the <properties> tag:

<avro.version>1.11.0</avro.version>
<confluent.version>7.0.1</confluent.version>
<spring-kafka.version>2.8.2</spring-kafka.version>
<kafka.version>3.0.0</kafka.version>

And finally add the actual library dependencies inside the <dependencies> tag:

<!-- kafka + confluent deps -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>${kafka.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.avro</groupId>
    <artifactId>avro</artifactId>
    <version>${avro.version}</version>
</dependency>
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-avro-serializer</artifactId>
    <version>${confluent.version}</version>
</dependency>
<dependency>
    <groupId>io.confluent</groupId>
    <artifactId>kafka-streams-avro-serde</artifactId>
    <version>${confluent.version}</version>
</dependency>

If you prefer, you can check out the code at this point using tag v3: git checkout v3.

Add initial avro schemas

Let's now add the initial Avro schemas for Leverage and Quotes. They will be the "contract" for our messages on the topics where we will initially send Leverage and Quote information, as explained in the beginning of this post.

  1. Create a folder under src/main called avro
  2. Create a file inside this folder called: leverage-price-quote.avsc
  3. Create another file inside the same folder called: stock-quote.avsc

Add the following content to those files:

leverage-price-quote.avsc

{
 "namespace": "com.maia.springkafkastreamkotlin.repository",
 "type": "record",
 "name": "LeveragePrice",
 "fields": [
   { "name": "symbol", "type": "string"},
   { "name": "leverage", "type": "double"}
 ]
}

stock-quote.avsc

{
 "namespace": "com.maia.springkafkastreamkotlin.repository",
 "type": "record",
 "name": "StockQuote",
 "fields": [
   { "name": "symbol", "type": "string"},
   { "name": "tradeValue", "type": "double"},
   { "name": "tradeTime", "type": ["null", "long"], "default": null}
 ]
}

In the pom.xml file, under the <build><plugins> section, add the following Avro plugin configuration:

<!-- avro -->
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>${avro.version}</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The avsc files will be used as a contract definition for the payloads you will be sending to and consuming from Kafka topics. The Confluent Kafka clients enforce that for you. Enforcing a contract can avoid many issues (like poison pills) and, together with the Schema Registry, provides a framework for consistent, incremental versioning of your messages.

It is worth mentioning that the Confluent Schema Registry also supports formats other than Avro, like Protobuf and JSON Schema, if you prefer one of them over Avro.

You can now build the project so the Avro plugin generates the Java objects we will use in our producers next.

mvn clean package -DskipTests

This should generate the Java objects automatically under target/generated-sources/avro/com/maia/springkafkastreamkotlin/repository.
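
As a quick, illustrative sanity check, the generated classes can be used like any other Java class from Kotlin; the field names come straight from the .avsc files above (the values here are made-up examples):

import com.maia.springkafkastreamkotlin.repository.LeveragePrice
import com.maia.springkafkastreamkotlin.repository.StockQuote

fun main() {
    // builders are generated by the avro-maven-plugin from the schemas above
    val quote = StockQuote.newBuilder()
        .setSymbol("GOOGL")
        .setTradeValue(2845.13)       // example value, not real market data
        .setTradeTime(System.currentTimeMillis())
        .build()

    val leverage = LeveragePrice.newBuilder()
        .setSymbol("GOOGL")
        .setLeverage(3.0)             // example value
        .build()

    println("$quote $leverage")
}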


You may git checkout v4 to check out the code up to this point.

Congratulations, you have reached the end of the first part of this tutorial series. You can now take a rest and, when you're ready, move on to the second part, which will be published soon.

In the second part we will create the topics using the Kafka Admin client and also create the Kafka producers and REST endpoints that enable us to create messages and send them to Kafka.

