Ivan Cvitkovic

Running Dapr on Kubernetes

Dapr, the Distributed Application Runtime, is a portable, event-driven runtime that can run in the cloud or on edge infrastructure. It packages best practices for building microservice applications into components called building blocks.

Each building block is completely independent so you can use one, some, or all of them in your application. Building blocks are extensible, so you can also write your own.

Dapr supports a wide range of programming languages and frameworks such as .NET, Java, Node.js, Go, and Python. That means you can write microservice apps using your favorite tools and deploy them virtually anywhere.


Building blocks are HTTP or gRPC APIs that can be called from application code and use one or more Dapr components. They abstract some of the major challenges of distributed development, such as service-to-service communication, state management, pub/sub, and observability. Building blocks do not depend on any underlying technology: if you need pub/sub functionality, for example, you can use Apache Kafka, RabbitMQ, Redis Streams, Azure Service Bus, or any other supported broker that interfaces with Dapr.

In this example we will show how to run Dapr on a Kubernetes cluster with two .NET applications. The first one will send messages to Apache Kafka, while the second one will read those messages and store them in Redis. Communication with Kafka and Redis will be handled through the Dapr client, which means we will have no dependencies on NuGet packages like Confluent.Kafka or StackExchange.Redis.

Architecture diagram



This demo requires you to have Docker, kubectl, and the Dapr CLI installed on your machine.

Also, clone the repository and cd into the right directory:

```
git clone
cd dapr-demo
```

Step 1 - Setup Dapr on your Kubernetes cluster

The first thing you need is an RBAC-enabled Kubernetes cluster. This could be running on your machine using Minikube/Docker Desktop, or it could be a fully-fledged cluster in Azure using AKS or another managed Kubernetes service from a different cloud vendor.

Once you have a cluster, follow the steps below to deploy Dapr to it. For more details, see the Dapr documentation.

```
$ dapr init -k
⌛  Making the jump to hyperspace...
ℹ️  Note: To install Dapr using Helm, see here:

✅  Deploying the Dapr control plane to your cluster...
✅  Success! Dapr has been installed to namespace dapr-system. To verify, run `dapr status -k` in your terminal. To get started, go here:
```

The dapr CLI will exit as soon as the Kubernetes deployments are created. Kubernetes deployments are asynchronous, so you will need to make sure that the Dapr deployments have actually completed before continuing.
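One way to wait for the control plane to finish rolling out is to watch its deployments in the dapr-system namespace (deployment names below are the usual ones; verify against `kubectl get deploy -n dapr-system` on your version):

```shell
# Block until each Dapr control-plane deployment reports all replicas ready
kubectl rollout status deploy/dapr-operator -n dapr-system
kubectl rollout status deploy/dapr-sidecar-injector -n dapr-system
kubectl rollout status deploy/dapr-sentry -n dapr-system
```

Alternatively, `dapr status -k` prints the health and status of each control-plane service.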

Step 2 - Setup Apache Kafka

The easiest way to set up Apache Kafka on your Kubernetes cluster is by using the Helm package manager. To install Helm on your development machine, follow this guide.
We will use the Bitnami Library for Kubernetes to launch ZooKeeper and the Kafka message broker.

```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-release bitnami/kafka
```

Step 3 - Setup Redis

Just like with Apache Kafka, an easy way to spin up Redis on your Kubernetes cluster is by using Helm.

```
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install redis bitnami/redis
```

To verify the installation of Kafka and Redis, run kubectl get all and you should see output similar to this:

```
NAME                             READY   STATUS    RESTARTS   AGE
pod/my-release-kafka-0           1/1     Running   0          18m
pod/my-release-zookeeper-0       1/1     Running   0          18m
pod/redis-master-0               1/1     Running   1          11m
pod/redis-slave-0                1/1     Running   1          11m
pod/redis-slave-1                1/1     Running   1          11m

NAME                                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                      ClusterIP                <none>        443/TCP                      15d
service/my-release-kafka                ClusterIP                <none>        9092/TCP                     18m
service/my-release-kafka-headless       ClusterIP   None         <none>        9092/TCP,9093/TCP            18m
service/my-release-zookeeper            ClusterIP                <none>        2181/TCP,2888/TCP,3888/TCP   18m
service/my-release-zookeeper-headless   ClusterIP   None         <none>        2181/TCP,2888/TCP,3888/TCP   18m
service/redis-headless                  ClusterIP   None         <none>        6379/TCP                     11m
service/redis-master                    ClusterIP                <none>        6379/TCP                     11m
service/redis-slave                     ClusterIP                <none>        6379/TCP                     11m

NAME                                    READY   AGE
statefulset.apps/my-release-kafka       1/1     18m
statefulset.apps/my-release-zookeeper   1/1     18m
statefulset.apps/redis-master           1/1     11m
statefulset.apps/redis-slave            2/2     11m
```

Step 4 - Create Dapr components in Kubernetes cluster

To deploy the pub/sub and state store components, make sure you are in the right directory and then apply the Dapr YAML manifests.

```
cd dapr-components
kubectl apply -f .\kafka.yaml
kubectl apply -f .\redis.yaml
```
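For reference, a Dapr pub/sub component pointing at the Kafka broker installed above might look roughly like this. The component name and the broker address (derived from the my-release Helm release in the default namespace) are assumptions; check the manifests in the repository for the actual definitions.

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: messagebus   # name the apps use to address this component (assumed)
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: "my-release-kafka.default.svc.cluster.local:9092"
    - name: authRequired
      value: "false"
```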

Step 5 - Deploy .NET Core applications

Now that all prerequisites are ready, we can deploy our apps. To deploy the .NET Core publisher and consumer applications, make sure you are in the right directory and then apply the Kubernetes manifests.

```
cd k8s
kubectl apply -f .\publisher.yaml
kubectl apply -f .\consumer.yaml
```

Each manifest contains a Deployment object for the application and a Service object for accessing the application through a browser.
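The Dapr sidecar gets injected because the Deployment's pod template carries Dapr annotations. The relevant fragment looks something like this (the app-id and port values here are illustrative assumptions; see the manifests in k8s for the real ones):

```yaml
# fragment of a Deployment's pod template
template:
  metadata:
    annotations:
      dapr.io/enabled: "true"      # tells the injector to add the Dapr sidecar
      dapr.io/app-id: "publisher"  # logical app name within Dapr (assumed)
      dapr.io/app-port: "80"       # port the app listens on (assumed)
```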

Navigate to localhost:8081/swagger and you will see our publisher app with a POST method on MessageController. This action sends a message to the newMessage topic on the Kafka pub/sub component. Communication between the application and the message broker is not performed directly: Dapr runs as a sidecar container inside the publisher pod and handles the entire process of sending the message.
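Under the hood, the publisher simply calls the sidecar's HTTP pub/sub API. You can exercise the same endpoint with curl (3500 is the sidecar's default HTTP port; the component name messagebus is an assumption, so substitute the name from your component manifest):

```shell
# Publish a message through the Dapr sidecar's pub/sub API
# (run from inside the publisher pod, or after port-forwarding 3500)
curl -X POST http://localhost:3500/v1.0/publish/messagebus/newMessage \
  -H "Content-Type: application/json" \
  -d '{"text": "hello from curl"}'
```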

Our consumer application is running on localhost:9091 and is subscribed to the newMessage topic on the Kafka pub/sub component. When a new message arrives, it reads the content and, through the Dapr client, saves it to the Redis state store under the key message.
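The saved value can likewise be read back through the sidecar's state API (the state store component name statestore is an assumption; use the name from your Redis component manifest):

```shell
# Read the value stored under key "message" via the consumer's Dapr sidecar
curl http://localhost:3500/v1.0/state/statestore/message
```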

To test the entire process, we can run a Redis client pod and check whether the content is stored. First, export the password to the REDIS_PASSWORD variable:

```
export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 --decode)
```

Then run the client with the following command:

```
# image name assumed from the Bitnami Redis chart
kubectl run --namespace default redis-client --rm --tty -i --restart='Never' \
   --image docker.io/bitnami/redis -- bash
```

and connect using Redis CLI:

```
redis-cli -h redis-master -a $REDIS_PASSWORD
```

Now that you are connected to Redis, you can run the command HGETALL message, which will return the content of the message we sent to Kafka. With this we have confirmed that the whole process works.

If you want to find out more about Dapr, the best place to start is the official documentation.
