Today I'm going to set up Strimzi Kafka on a kind cluster.
1. Kind
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
Kind: Install
# For M1 / ARM Macs
$ [ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.30.0/kind-darwin-arm64
$ chmod +x ./kind
$ sudo mv ./kind /opt/homebrew/bin/kind
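A quick sanity check that the binary is on your PATH (the exact version string will differ on your machine):
# Verify the install
$ kind version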
Kind: Creating a Cluster
# 1. First, start Docker Desktop
# 2. Create a cluster
$ kind create cluster # Default cluster context name is `kind`.
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.34.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
$ k config current-context
kind-kind
After creating a cluster, you can use kubectl to interact with it via the configuration file generated by kind.
By default, the cluster access configuration is stored in ${HOME}/.kube/config if the $KUBECONFIG environment variable is not set.
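If you want to see exactly what kind wrote, the context list and the generated kubeconfig are easy to inspect (assuming the default cluster name kind):
# kind adds a context named kind-<cluster-name>
$ kubectl config get-contexts
# Print the kubeconfig generated for this cluster
$ kind get kubeconfig --name kind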
Kind: CLI
# Get
$ kind get clusters
# Delete
$ kind delete cluster
# Info
$ kubectl cluster-info --context kind-{KIND_CLUSTER_NAME}
Kind: Advanced
Kind cluster with multiple nodes (1 control plane + 3 workers):
$ vim kind-config.yml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# nodes: 1 control plane node and 3 workers
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
$ kind create cluster --config kind-config.yml
$ k get no
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   43s   v1.34.0
kind-worker          Ready    <none>          29s   v1.34.0
kind-worker2         Ready    <none>          29s   v1.34.0
kind-worker3         Ready    <none>          29s   v1.34.0
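Each of these “nodes” is just a Docker container on your machine, which you can confirm directly:
# kind nodes are plain Docker containers
$ docker ps --filter "name=kind" --format "table {{.Names}}\t{{.Image}}\t{{.Status}}"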
2. Kafka: Strimzi
2.1. Deploy Strimzi
First, deploy the Strimzi Cluster Operator:
$ kubectl create namespace kafka
$ kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
This applies the Strimzi install files:
- ClusterRoles
- ClusterRoleBindings
- some Custom Resource Definitions (CRDs)
CRDs define the schemas used for CRs such as Kafka, KafkaTopic and so on.
The YAML files for ClusterRoles and ClusterRoleBindings downloaded from strimzi.io contain a default namespace of kafka.
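If you want the operator in a different namespace, the ?namespace= query parameter in the install URL rewrites that default for you; a sketch with a hypothetical namespace my-kafka:
# Hypothetical namespace; the query parameter rewrites the namespaces in the downloaded YAML to match
$ kubectl create namespace my-kafka
$ kubectl create -f 'https://strimzi.io/install/latest?namespace=my-kafka' -n my-kafka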
- log
$ kubectl logs deployment/strimzi-cluster-operator -n kafka -f
Strimzi Quick Look
- Cluster Operator: Manages Kafka clusters and related components
- Topic Operator: Creates, configures, and deletes Kafka topics
- User Operator: Manages Kafka users and their authentication credentials
The Cluster Operator can deploy the Entity Operator, which runs the Topic Operator and User Operator in a single pod. The Entity Operator can be configured to run one or both operators.
Operators within the Strimzi architecture
A standard Kafka deployment using Strimzi might include the following components:
- Kafka: cluster of broker nodes as the core component
- Kafka Connect: cluster for external data connections
- Kafka MirrorMaker: cluster to mirror data to another Kafka cluster
- Kafka Exporter: to extract additional Kafka metrics data for monitoring
- Kafka Bridge: to enable HTTP-based communication with Kafka
- Cruise Control: to rebalance topic partitions across brokers
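Most of these components are managed through their own custom resources; once the operator is installed, a quick way to see what it registered:
# Strimzi CRDs (Kafka, KafkaNodePool, KafkaTopic, KafkaUser, KafkaConnect, ...)
$ kubectl get crds | grep strimzi.io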
2.2. Create an Apache Kafka cluster
kafka-persistent.yml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    version: 4.0.0
    metadataVersion: 4.0-IV3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
  entityOperator:
    topicOperator: {}
    userOperator: {}
$ kubectl apply -f kafka-persistent.yml -n kafka
- Check
$ k get svc -A
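The brokers can take a few minutes to come up. To block until the Kafka resource reports Ready (the same check the Strimzi quickstart uses):
# Wait for the Kafka CR to reach the Ready condition (5-minute timeout)
$ kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka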
(Optional) If you need to expose Kafka outside the cluster via NodePort:
- kind config with extraPortMappings
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# nodes: 1 control plane node and 3 workers
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30092 # for bootstrap
        hostPort: 30092
      - containerPort: 30093 # for broker 0
        hostPort: 30093
      - containerPort: 30094 # for broker 1
        hostPort: 30094
      - containerPort: 30095 # for broker 2
        hostPort: 30095
        listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
        protocol: tcp # Optional, defaults to tcp
  - role: worker
  - role: worker
  - role: worker
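Note that extraPortMappings can only be set at creation time, so an existing kind cluster has to be recreated with this config (assuming the file is still kind-config.yml):
# Recreate the cluster with the port mappings baked in
$ kind delete cluster
$ kind create cluster --config kind-config.yml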
- Kafka cluster with an external nodeport listener
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: controller
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - controller
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: broker
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        kraftMetadata: shared
        deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  annotations:
    strimzi.io/node-pools: enabled
    strimzi.io/kraft: enabled
spec:
  kafka:
    version: 4.0.0
    metadataVersion: 4.0-IV3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external # 🔥
        port: 9094
        type: nodeport
        tls: true
        configuration:
          bootstrap:
            nodePort: 30092
          brokers:
            - broker: 0
              nodePort: 30093
            - broker: 1
              nodePort: 30094
            - broker: 2
              nodePort: 30095
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
  entityOperator:
    topicOperator: {}
    userOperator: {}
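With the NodePorts mapped to the host, a client on your machine can bootstrap against localhost:30092. The external listener above uses TLS, so the client needs the cluster CA that Strimzi generates; a rough sketch with kcat (assuming kcat is installed locally, and relaxing hostname verification since the broker certificates are issued for node addresses rather than localhost):
# Extract the cluster CA certificate (secret name follows <cluster>-cluster-ca-cert)
$ kubectl get secret my-cluster-cluster-ca-cert -n kafka -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# List cluster metadata from the host through the mapped NodePort
$ kcat -b localhost:30092 -X security.protocol=ssl -X ssl.ca.location=ca.crt -X ssl.endpoint.identification.algorithm=none -L
Producing and consuming from the host may additionally require setting advertisedHost on the brokers in the listener configuration, since the advertised addresses default to the cluster-internal node addresses.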
2.3. Pub / Sub
- producer
$ kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.47.0-kafka-4.0.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic
- consumer
$ kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.47.0-kafka-4.0.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
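The console producer relies on topic auto-creation (enabled by default in Kafka). The more declarative route is to let the Topic Operator manage the topic through a KafkaTopic resource; a minimal sketch:
# Declare my-topic via the Topic Operator instead of relying on auto-creation
$ kubectl apply -n kafka -f - <<EOF
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
EOF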
2.4. Kafka: delete
# Delete everything by removing the kafka namespace (the Kafka cluster and the operator)
$ kubectl delete namespace kafka