Hi devs :)
In this post, I’ll show you how to set up Apache Kafka using Docker and build a simple Kafka producer and consumer in C#. This is a step-by-step guide for developers who want to explore Kafka for real-time data streaming or message queuing in their applications.
Why Kafka?
Apache Kafka is a powerful, distributed, and scalable streaming platform designed to handle real-time data pipelines. Whether you're building a large-scale log processing system or a more modest service that needs reliable message queuing, Kafka is up to the task. Compared to traditional message brokers like RabbitMQ or Azure Service Bus, Kafka excels in high-throughput use cases and persistent message storage.
Let's dive in!
Prerequisites
Before we start, you’ll need the following:
- Docker installed (Docker installation guide).
- .NET SDK installed (.NET installation guide).
If you have these set up, you’re good to go.
Step 1: Setting Up Kafka Locally with Docker
Kafka has traditionally relied on Zookeeper to manage its cluster. Instead of manually installing Kafka and Zookeeper, we’ll use Docker to spin up both services quickly.
Start by creating a docker-compose.yml file in an empty directory:
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - "2181:2181"

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Explanation:
- Zookeeper: Required for managing the Kafka cluster.
- Kafka: Configured to communicate with Zookeeper and exposed on port 9092; the advertised listener points at localhost:9092 so clients running on your host machine (like the C# apps we'll build) can reach it.
Now, run the following command to start Kafka and Zookeeper:
docker-compose up -d
Docker will pull the necessary images and run both services in the background. To verify everything is running, you can check the running containers:
docker ps
You should see both the kafka and zookeeper containers listed.
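If you want an extra sanity check that the broker itself is responding, the Confluent Kafka image also bundles the standard Kafka CLI tools (assuming they are on the container's PATH, as in recent versions of the image), so you can list the topics from inside the container:
docker exec kafka kafka-topics --bootstrap-server localhost:9092 --list
At this point the list will be empty (or contain only internal topics), since we haven't produced anything yet.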
Step 2: Creating a Kafka Producer in C#
Now that we have Kafka running locally, let’s create a Kafka producer in C# that sends messages to a Kafka topic.
First, create a new .NET Console application:
dotnet new console -n KafkaProducer
cd KafkaProducer
Next, add the Confluent.Kafka NuGet package, Confluent's Kafka client library for .NET:
dotnet add package Confluent.Kafka
Then, open the Program.cs file and replace its content with the following Kafka producer code:
using Confluent.Kafka;
using System;
using System.Threading.Tasks;

class Program
{
    public static async Task Main(string[] args)
    {
        // Point the producer at the broker we exposed on localhost:9092.
        var config = new ProducerConfig
        {
            BootstrapServers = "localhost:9092"
        };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            try
            {
                var message = "TransactionID: 12345 | Amount: $100 | AccountFrom: 123 | AccountTo: 456";

                // ProduceAsync completes once the broker has acknowledged the message.
                var deliveryReport = await producer.ProduceAsync("transactions", new Message<Null, string> { Value = message });

                Console.WriteLine($"Message '{deliveryReport.Value}' delivered to '{deliveryReport.TopicPartitionOffset}'");
            }
            catch (ProduceException<Null, string> ex)
            {
                Console.WriteLine($"Error: {ex.Error.Reason}");
            }
        }
    }
}
This simple producer sends a message to a topic called transactions. Kafka will automatically create the topic if it doesn’t exist (automatic topic creation is enabled by default in the Confluent broker image).
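As a side note: with a Null key, Kafka spreads messages across partitions however it likes. If you later want related messages (say, all transactions from the same account) to stay in order on the same partition, you can produce with a key instead. The sketch below is only an illustration of that idea, not part of the app we're building here; the key value is hypothetical:

using Confluent.Kafka;
using System;
using System.Threading.Tasks;

class KeyedProducerExample
{
    public static async Task Main(string[] args)
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        // Using <string, string> instead of <Null, string> lets us set a message key.
        // Messages that share a key are always routed to the same partition,
        // which preserves their ordering relative to each other.
        using (var producer = new ProducerBuilder<string, string>(config).Build())
        {
            var result = await producer.ProduceAsync("transactions", new Message<string, string>
            {
                Key = "account-123",   // hypothetical key: the source account ID
                Value = "TransactionID: 12346 | Amount: $50 | AccountFrom: 123 | AccountTo: 789"
            });

            Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
        }
    }
}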
Step 3: Creating a Kafka Consumer in C#
Now let’s create a consumer to read messages from the transactions topic. To do this, we’ll create another .NET Console application.
dotnet new console -n KafkaConsumer
cd KafkaConsumer
Just like before, add the Confluent.Kafka package:
dotnet add package Confluent.Kafka
Now, replace the Program.cs content with the following consumer code:
using Confluent.Kafka;
using System;
using System.Threading;

class Program
{
    public static void Main(string[] args)
    {
        var config = new ConsumerConfig
        {
            // Consumers in the same group share the partitions of a topic between them.
            GroupId = "transaction-group",
            BootstrapServers = "localhost:9092",
            // Start from the beginning of the topic if this group has no committed offsets yet.
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
        {
            consumer.Subscribe("transactions");

            var cancellationTokenSource = new CancellationTokenSource();

            // Let Ctrl+C break out of the consume loop cleanly.
            Console.CancelKeyPress += (_, e) =>
            {
                e.Cancel = true;
                cancellationTokenSource.Cancel();
            };

            try
            {
                while (true)
                {
                    var consumeResult = consumer.Consume(cancellationTokenSource.Token);
                    Console.WriteLine($"Received message: {consumeResult.Message.Value}");
                }
            }
            catch (OperationCanceledException)
            {
                // Leave the consumer group cleanly and commit final offsets.
                consumer.Close();
            }
        }
    }
}
This consumer subscribes to the transactions topic and reads messages as they come in.
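One variation worth knowing about: by default the Confluent.Kafka consumer commits offsets automatically in the background. If you would rather mark a message as processed only after your own logic has finished with it (for example, after persisting the transaction somewhere), you can turn auto-commit off and commit explicitly. Below is a minimal sketch of that pattern, reusing the same topic and group as above; the processing step is hypothetical:

using Confluent.Kafka;
using System;
using System.Threading;

class ManualCommitConsumerExample
{
    public static void Main(string[] args)
    {
        var config = new ConsumerConfig
        {
            GroupId = "transaction-group",
            BootstrapServers = "localhost:9092",
            AutoOffsetReset = AutoOffsetReset.Earliest,
            // Take control of when offsets are committed.
            EnableAutoCommit = false
        };

        using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
        {
            consumer.Subscribe("transactions");

            while (true)
            {
                var result = consumer.Consume(CancellationToken.None);

                // ... hypothetical processing step, e.g. persisting the transaction ...
                Console.WriteLine($"Processed: {result.Message.Value}");

                // Commit the offset only after processing succeeded, so a crash
                // before this point means the message will be redelivered.
                consumer.Commit(result);
            }
        }
    }
}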
Step 4: Running the Producer and Consumer
First, make sure Kafka is running via Docker:
docker-compose up -d
Then, start the KafkaProducer:
cd KafkaProducer
dotnet run
This will send a message to the Kafka topic transactions.
Now, in a separate terminal, start the KafkaConsumer:
cd KafkaConsumer
dotnet run
You should see the message from the producer printed in the consumer’s terminal.
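Given the producer code above, the output should look roughly like this:
Received message: TransactionID: 12345 | Amount: $100 | AccountFrom: 123 | AccountTo: 456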
Step 5: Shutting Down
To stop Kafka, simply run:
docker-compose down
This will stop and remove the Kafka and Zookeeper containers.
Conclusion
And that’s it! You've successfully set up Apache Kafka locally using Docker and created both a Kafka producer and consumer in C#. With this setup, you can now start building more complex real-time systems that handle high-throughput data streams or asynchronous communication between services.
Happy coding!