Understanding the Need for Apache Kafka

Aadarsh Nagrath

Welcome back, everyone, to another exciting blog post! Today, we are going to discuss why Apache Kafka is essential, why it is so fast, and whether we actually need Kafka at all.

Recently, I came across an interesting question: "Isn't it possible for databases to improve their throughput themselves by upgrading their technology and performance, so there would be no need for Kafka or any other kind of middleware service?"

This is a great question. Essentially, it suggests that we need Kafka because it has higher throughput, while databases have lower throughput. So, instead of using Kafka, why not just upgrade our databases to handle higher throughput?

To address this question, let's first understand the fundamental purpose of a database. If I ask you, "What is a database?", you might say, "A database is used to store data." This is correct but incomplete. A database also provides mechanisms to read and query the data in various ways, such as looking records up by ID, using indexes, aggregating, and filtering on multiple conditions.

Why Do We Need Services Like Kafka?

Kafka is a distributed streaming platform that acts as an intermediary between data producers and consumers. It is designed to handle high throughput and provide low-latency access to data streams. The key feature that sets Kafka apart from traditional databases is its ability to process real-time data efficiently.
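
To make this concrete, here is a minimal producer sketch using the kafka-python client (one of several Kafka clients for Python). The broker address and the "events" topic are assumptions for illustration, not part of any real setup:

```python
from kafka import KafkaProducer

# Assumes a Kafka broker reachable at localhost:9092 (illustrative).
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Publish a raw message to a hypothetical "events" topic.
producer.send("events", b"user-42 clicked checkout")
producer.flush()  # block until buffered messages are actually sent
```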

Durability and Storage Mechanism

First, let's talk about the durability of databases. Databases store data on durable storage media like hard disks or SSDs, so even if the server goes down and restarts, the data remains intact. Durability is a core design goal: data must not be lost even in the event of a failure.

Kafka, on the other hand, writes incoming messages to an append-only log on disk, but it leans heavily on the operating system's page cache, so most reads and writes are actually served from RAM. That is what makes Kafka feel memory-fast. Its durability model differs from a database's, though: rather than transactional guarantees, Kafka relies on replication across brokers and configurable retention to keep data safe.

Data Processing and Querying

Databases are not just about storing data; they also provide advanced mechanisms to read and process data efficiently. For example, you can set up primary keys, foreign keys, and indexes, and perform complex queries and aggregations on your data. Databases optimize the storage and retrieval of structured data, allowing for efficient querying and data manipulation.
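
As a rough contrast, here is what that querying power looks like with Python's built-in sqlite3 module; the table and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE deliveries (
           id INTEGER PRIMARY KEY,  -- primary key
           driver TEXT NOT NULL,
           distance_km REAL
       )"""
)
conn.execute("CREATE INDEX idx_driver ON deliveries(driver)")  # secondary index
conn.executemany(
    "INSERT INTO deliveries (driver, distance_km) VALUES (?, ?)",
    [("alice", 12.5), ("alice", 8.0), ("bob", 3.2)],
)

# Indexed filtering plus aggregation -- the kind of ad hoc query Kafka does not offer.
for row in conn.execute(
    "SELECT driver, COUNT(*) AS trips, SUM(distance_km) AS total_km "
    "FROM deliveries GROUP BY driver"
):
    print(row)  # ('alice', 2, 20.5) then ('bob', 1, 3.2)
```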

Kafka, in contrast, is designed for high-throughput, real-time data ingestion. It allows applications to produce and consume data at high speeds, but it does not offer the advanced querying capabilities of a traditional database. Kafka is optimized for streaming large volumes of data quickly and reliably, which makes it ideal for use cases involving real-time data processing.

Structured vs. Unstructured Data

Databases are excellent for storing structured data, where the schema is defined, and data is organized in tables with rows and columns. This structured format allows for efficient querying and data manipulation.

Kafka excels at handling unstructured or semi-structured data, where the schema may not be predefined, and the data can come in various formats. For example, a real-time application like a delivery tracking system generates data continuously in an unstructured format. Storing this data directly in a database would be inefficient and would quickly overwhelm the database.
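
Because Kafka treats message payloads as opaque bytes, a producer can send schema-less JSON without any upfront table design. A small sketch, again assuming the kafka-python client and a made-up "driver-locations" topic:

```python
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# No predefined schema: one event can carry fields the previous one didn't.
producer.send("driver-locations", {"driver": "alice", "lat": 52.52, "lon": 13.40})
producer.send("driver-locations", {"driver": "alice", "lat": 52.53, "lon": 13.41, "speed_kmh": 38})
producer.flush()
```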

Real-Time Data Processing

Kafka is designed to handle real-time data streams, allowing for the ingestion and processing of data as it is generated. This is crucial for applications that need to process data in real time, such as monitoring systems, IoT applications, and real-time analytics.

In a real-time application, data is continuously generated and needs to be processed and stored quickly. Kafka acts as a buffer, ingesting the data quickly and allowing downstream consumers to process and store the data at their own pace. This decoupling of data ingestion and processing ensures that the system remains responsive and scalable.
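
Here is a sketch of that consumer side with kafka-python. By disabling auto-commit, the consumer acknowledges a message only after its own processing finishes, so it drains the buffer at whatever pace it can sustain (process() is a placeholder for your own logic):

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "driver-locations",
    bootstrap_servers="localhost:9092",
    group_id="tracking-workers",  # consumers in the same group share the load
    enable_auto_commit=False,     # commit offsets only after work is done
    auto_offset_reset="earliest",
)

for message in consumer:
    process(message.value)  # placeholder for your own handling logic
    consumer.commit()       # advance our offset at our own pace
```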

Why Kafka Is Fast

The speed of Kafka can be attributed to several factors:

Page Cache Usage: Kafka does not manage its own in-memory store; it writes to disk and lets the operating system's page cache absorb reads and writes. Recently produced messages are usually still in RAM when consumers fetch them, so most reads never hit the disk.

Sequential Disk Writes: When Kafka writes data to disk, it does so sequentially. This reduces the overhead of disk seek time, making the write process more efficient.

Batch Processing: Kafka groups messages into batches, amortizing the overhead of handling each message individually. This improves throughput and reduces latency; a sketch of the producer-side batching settings follows this list.
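
As promised, here is a hedged sketch of those batching knobs on a kafka-python producer; the values are illustrative, not tuning recommendations:

```python
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    batch_size=32 * 1024,     # pack up to 32 KB of messages per partition batch
    linger_ms=20,             # wait up to 20 ms for a batch to fill before sending
    compression_type="gzip",  # compress whole batches rather than single messages
)
```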

Example Use Case: Delivery Tracking

Let's take the example of a delivery tracking system, where a delivery driver generates location data continuously. This data needs to be processed in real time to provide updates to users. Kafka can ingest this continuous stream of location data and allow consumers to process and store it.

A consumer application can aggregate the location data to compute the total distance traveled, the time taken for delivery, and other relevant metrics. This aggregated data can then be stored in a database for further analysis and reporting. Kafka ensures that the raw, unstructured data is ingested quickly, while the consumer application processes and stores the structured data in the database.
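
A minimal sketch of such a consumer, reusing the hypothetical "driver-locations" JSON events from earlier. The haversine formula gives the distance between consecutive GPS points; in a real pipeline the running total would be flushed to a database rather than kept in a local variable:

```python
import json
import math

from kafka import KafkaConsumer

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

consumer = KafkaConsumer(
    "driver-locations",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

total_km = 0.0
last = None
for message in consumer:
    point = message.value  # e.g. {"driver": "alice", "lat": 52.52, "lon": 13.40}
    if last is not None:
        total_km += haversine_km(last["lat"], last["lon"],
                                 point["lat"], point["lon"])
    last = point
    # In a real system, total_km would be persisted to a database for reporting.
```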

Kafka's Role in a Modern Data Architecture

In a modern data architecture, Kafka serves as a high-throughput, real-time data pipeline that bridges the gap between data producers and consumers. It allows for the decoupling of data ingestion and processing, ensuring that the system can scale and handle large volumes of data efficiently.

By using Kafka, we can ingest unstructured data quickly and reliably, process it to derive meaningful insights, and store the structured results in a database for long-term storage and querying. This architecture lets us handle real-time data streams effectively while leveraging the strengths of both Kafka and traditional databases.

I am sure you would also be interested to know how PayPal scaled Kafka to 1.3 trillion daily messages.

Conclusion

In summary, Kafka is essential for handling high-throughput, real-time data ingestion and processing. It complements traditional databases by providing a scalable and efficient mechanism for ingesting unstructured data and decoupling data ingestion from processing. Databases, on the other hand, are designed for durable storage and efficient querying of structured data. Together, Kafka and databases form a powerful combination that allows us to build scalable, real-time data architectures.

I hope this post has clarified why we need services like Kafka and how they fit into a modern data architecture. If you have any more questions or doubts, please feel free to ask in the comments section. Thank you for reading, and I'll see you in the next post!
