
ANKUSH CHOUDHARY JOHAL

Posted on • Originally published at johal.in

How to Build Real-Time Analytics Dashboards with Apache Kafka 3.7 and Grafana 11 for Python 3.13 Apps


Introduction

Real-time analytics dashboards let you visualize streaming data instantly, enabling faster decision-making for Python applications. This guide walks through building a complete pipeline using Apache Kafka 3.7 for event streaming, Grafana 11 for visualization, and Python 3.13 for data production.

Prerequisites

  • Python 3.13 installed locally
  • Apache Kafka 3.7 (download from official site)
  • Grafana 11 (download from Grafana Labs)
  • InfluxDB 2.x (for storing time-series Kafka data)
  • Confluent Kafka Python client: pip install confluent-kafka
  • Kafka Connect InfluxDB Sink Connector

Step 1: Configure and Start Apache Kafka 3.7

Kafka 3.7 runs in KRaft mode (no ZooKeeper dependency), which is production-ready. Extract the downloaded Kafka tarball:

tar -xzf kafka_2.13-3.7.0.tgz
cd kafka_2.13-3.7.0

Generate a KRaft cluster UUID and format the log directory:

KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
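For reference, the ID printed by kafka-storage.sh random-uuid is a random UUID encoded as URL-safe base64 without padding (22 characters). Purely as an illustration, the same format can be produced in Python (kafka_style_uuid is a hypothetical helper, not part of Kafka):

```python
import base64
import uuid

def kafka_style_uuid() -> str:
    # Kafka cluster IDs: 16 random bytes, URL-safe base64, padding stripped
    raw = uuid.uuid4().bytes
    return base64.urlsafe_b64encode(raw).rstrip(b'=').decode('ascii')

print(kafka_style_uuid())  # 22 characters, e.g. 'MkU3OEVhNTcwNTJENDM2Qk'
```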

Start the Kafka broker:

bin/kafka-server-start.sh config/kraft/server.properties

Create a topic named app-metrics to receive Python app data:

bin/kafka-topics.sh --create --topic app-metrics --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

Step 2: Produce Data from Python 3.13 App

Use the Confluent Kafka client to send mock application metrics to the app-metrics topic. Create a file producer.py:

import json
import time
from confluent_kafka import Producer

# Kafka producer configuration
producer_conf = {
    'bootstrap.servers': 'localhost:9092',
    'client.id': 'python-metrics-producer'
}
producer = Producer(producer_conf)

# Mock metric generation
def generate_metric():
    return {
        'timestamp': int(time.time() * 1000),
        'app_id': 'python-app-001',
        'cpu_usage': round(30 + (time.time() % 50), 2),
        'memory_usage': round(40 + (time.time() % 30), 2),
        'request_count': int(time.time() % 100)
    }

# Produce messages continuously
try:
    while True:
        metric = generate_metric()
        producer.produce(
            topic='app-metrics',
            key=metric['app_id'],
            value=json.dumps(metric).encode('utf-8')
        )
        producer.poll(0)  # serve delivery callbacks without blocking
        print(f"Produced: {metric}")
        time.sleep(1)
except KeyboardInterrupt:
    print("Producer stopped.")
finally:
    producer.flush()  # wait for any queued messages to be delivered

Run the producer with python3.13 producer.py to start sending metrics to Kafka.
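Before wiring up the sink, it helps to sanity-check the payload shape without a running broker. This sketch reproduces generate_metric from producer.py and round-trips it through the same JSON encode/decode the pipeline uses:

```python
import json
import time

# Reproduced from producer.py: the metric payload sent to Kafka
def generate_metric():
    return {
        'timestamp': int(time.time() * 1000),
        'app_id': 'python-app-001',
        'cpu_usage': round(30 + (time.time() % 50), 2),
        'memory_usage': round(40 + (time.time() % 30), 2),
        'request_count': int(time.time() % 100)
    }

# The same bytes the producer puts on the wire, decoded back
raw = json.dumps(generate_metric()).encode('utf-8')
decoded = json.loads(raw)

assert decoded['app_id'] == 'python-app-001'
assert 30 <= decoded['cpu_usage'] <= 80     # 30 + (t % 50)
assert 40 <= decoded['memory_usage'] <= 70  # 40 + (t % 30)
print(decoded)
```

If these bounds ever change in producer.py, any downstream alert thresholds in Grafana should change with them.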

Step 3: Set Up InfluxDB and Kafka Connect Sink

Install InfluxDB 2.x and start the service. Create a bucket named kafka-metrics and generate an API token with write permissions.

Download the Kafka Connect InfluxDB Sink Connector and place it in a directory listed on Kafka Connect's plugin.path (not Kafka's libs directory). The Connect REST API expects a JSON payload rather than a .properties file, so create the connector configuration as connect-influxdb.json:

{
  "name": "influxdb-sink",
  "config": {
    "connector.class": "io.confluent.connect.influxdb.InfluxDBSinkConnector",
    "topics": "app-metrics",
    "influxdb.url": "http://localhost:8086",
    "influxdb.bucket": "kafka-metrics",
    "influxdb.org": "your-org-id",
    "influxdb.token": "your-influxdb-token",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter"
  }
}

Start Kafka Connect in distributed mode:

bin/connect-distributed.sh config/connect-distributed.properties

Deploy the connector using the Connect REST API:

curl -X POST -H "Content-Type: application/json" -d @connect-influxdb.json http://localhost:8083/connectors

Kafka Connect will now sink all messages from app-metrics to InfluxDB in real time.
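Under the hood, InfluxDB stores each record as a point in line protocol. The exact measurement/tag/field mapping depends on the connector's configuration, but a hypothetical to_line_protocol helper illustrates the rough shape of what gets written:

```python
def to_line_protocol(metric: dict) -> str:
    # Hypothetical mapping, for illustration only: measurement = topic name,
    # app_id as a tag, numeric values as fields, timestamp in nanoseconds.
    tags = f"app_id={metric['app_id']}"
    fields = ",".join(
        f"{k}={metric[k]}" for k in ('cpu_usage', 'memory_usage', 'request_count')
    )
    ts_ns = metric['timestamp'] * 1_000_000  # milliseconds -> nanoseconds
    return f"app-metrics,{tags} {fields} {ts_ns}"

sample = {'timestamp': 1700000000000, 'app_id': 'python-app-001',
          'cpu_usage': 42.5, 'memory_usage': 55.1, 'request_count': 7}
print(to_line_protocol(sample))
# -> app-metrics,app_id=python-app-001 cpu_usage=42.5,memory_usage=55.1,request_count=7 1700000000000000000
```

Here app_id becomes a tag (indexed, good for filtering per app), the numeric values become fields, and the millisecond timestamp is converted to InfluxDB's default nanosecond precision.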

Step 4: Configure Grafana 11 Data Source

Start Grafana 11 and access the UI at http://localhost:3000 (default credentials: admin/admin).

Navigate to Connections > Data Sources > Add Data Source, select InfluxDB, and configure:

  • Query Language: Flux
  • URL: http://localhost:8086
  • Organization: Your InfluxDB org ID
  • Token: Your InfluxDB API token
  • Default Bucket: kafka-metrics

Click "Save & Test" to verify the connection.

Step 5: Build Real-Time Dashboard in Grafana 11

Navigate to Dashboards > New Dashboard > Add Visualization. Select the InfluxDB data source and write a Flux query to fetch metrics:

from(bucket: "kafka-metrics")
  |> range(start: -1h)
  |> filter(fn: (r) => r["_measurement"] == "app-metrics")
  |> filter(fn: (r) => r["_field"] == "cpu_usage" or r["_field"] == "memory_usage")
  |> aggregateWindow(every: 10s, fn: mean, createEmpty: false)
  |> yield(name: "mean")

Customize the visualization (time series chart works best for real-time metrics). Add panels for request count, CPU, and memory usage. Set the dashboard refresh interval to 5 seconds to see real-time updates.
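The aggregateWindow call buckets points into fixed 10-second windows and averages each bucket; conceptually it behaves like this Python sketch (aggregate_window is an illustrative helper, not a Flux or Grafana API):

```python
from collections import defaultdict
from statistics import mean

def aggregate_window(points, every_s=10):
    # points: list of (timestamp_ms, value) pairs.
    # Returns {window_start_ms: mean of values falling in that window}.
    windows = defaultdict(list)
    for ts_ms, value in points:
        window_start = (ts_ms // (every_s * 1000)) * every_s * 1000
        windows[window_start].append(value)
    return {start: round(mean(vals), 2) for start, vals in sorted(windows.items())}

points = [(0, 30.0), (4000, 40.0), (12000, 50.0), (15000, 70.0)]
print(aggregate_window(points))
# -> {0: 35.0, 10000: 60.0}
```

With the producer emitting one message per second, each 10-second window averages roughly ten points, which smooths the chart without hiding short spikes.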

Conclusion

You now have a fully functional real-time analytics pipeline: Python 3.13 produces metrics to Kafka 3.7, Kafka Connect streams them into InfluxDB, and Grafana 11 visualizes the data in near real time. The same setup can be scaled for production Python applications with high-throughput event streams by adding topic partitions, more producer instances, and additional Connect workers.
