Apache Kafka® and Kubernetes are a perfect duo. Kubernetes provides a highly scalable, resilient orchestration platform that simplifies the deployment and management of Kafka clusters, so DevOps teams can spend less time dealing with infrastructure and more time building applications and services. Experts expect this trend to accelerate as more organizations use Kubernetes to manage their data infrastructure.
If you're in the planning stages, know that you have a range of options, beginning with whether to deploy Kafka yourself or to purchase a managed solution. The right answer depends on a number of factors, including your budget (DIY is not always cheaper!), the skill level of your staff, and any rules and regulations that govern your industry or your company. This blog will walk you through the options to consider, so you can help your organization make the right choice.
Why DIY?
Self-managed or "do-it-yourself" (DIY) Kafka has some advantages. You'll have more control over your deployment, including whether to extend it across multiple clouds. It may also be easier to align the deployment with your internal security and operations policies, accommodate your specific data-residency requirements, and keep costs under control.
In this scenario, your in-house staff must perform the following tasks:
- Setting up the infrastructure and storage
- Installing and configuring Kafka
- Setting up Apache ZooKeeper™, if required. (ZooKeeper mode is deprecated and will no longer be supported as of Kafka 4.0. From that point on, Kafka will use KRaft, the Kafka Raft consensus protocol.)
- Monitoring and troubleshooting your clusters
- Securing the deployment
- Scaling horizontally and vertically
- Replicating data (for disaster recovery and availability)
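To give a concrete sense of the replication planning that last task involves, the sketch below (plain Python, with illustrative values) captures the standard Kafka durability rule of thumb: with `acks=all`, a topic with replication factor *r* and `min.insync.replicas` *m* remains writable through *r − m* broker failures.

```python
def failures_tolerated(replication_factor: int, min_insync_replicas: int) -> int:
    """Broker failures a topic survives while remaining writable with
    acks=all -- the standard Kafka durability rule of thumb."""
    if min_insync_replicas > replication_factor:
        raise ValueError("min.insync.replicas cannot exceed replication.factor")
    return replication_factor - min_insync_replicas

# A common production baseline: replication.factor=3, min.insync.replicas=2
print(failures_tolerated(3, 2))  # -> 1: one broker can fail without blocking writes
```

This is why `replication.factor=3` with `min.insync.replicas=2` is such a common baseline: it survives a single broker failure without either losing acknowledged writes or refusing new ones.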
Is "managed" Kafka really more manageable?
"Managed" Kafka is a service you can purchase from hyperscalers, such as Amazon Web Services, and from other third-party vendors. While the initial cost of the service may give you sticker shock, you may save money on hosting and payroll in the long run. Note that some managed solutions may still require your team to have some level of Kafka expertise on board, especially during the setup phase.
With managed Kafka, you may give up some control over your data residency. What's more, if you're not sure how much compute or storage space you'll need, you may end up with some surprise hosting costs.
While each Kafka vendor's exact offering varies a bit, hosted solutions include setup of the cloud infrastructure necessary to run Kafka clusters, including virtual machines, network, storage, backups, and security.
Most managed solutions (whether or not they include hosting) provide features that:
- Install and manage the Kafka software, including upgrades, patches, and security fixes.
- Monitor Kafka clusters for issues, such as running out of memory or storage space, and provide alerts or notifications when problems arise. These solutions usually also include tools for troubleshooting and resolving problems like the above.
- Ensure that data stored in Kafka clusters is durable and available by replicating data across multiple nodes and data centers.
- Perform a variety of additional functions, depending on the solution. For example, they may include features that make it easy to install additional functionality—such as schema management, connectors, and ksqlDB—so you can integrate with other data systems, transform data, and build real-time applications.
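To make the monitoring point above concrete: much of Kafka alerting, DIY or managed, centers on consumer lag, the gap between a partition's latest offset and a consumer group's committed offset. A minimal illustration in plain Python (the offsets are made-up values; real deployments read them from the cluster):

```python
def consumer_lag(log_end_offset: int, committed_offset: int) -> int:
    """Messages written to a partition but not yet processed by the group."""
    return max(0, log_end_offset - committed_offset)

# Hypothetical (log_end_offset, committed_offset) pairs per partition
partitions = {0: (1_500, 1_480), 1: (2_000, 2_000), 2: (900, 650)}
total_lag = sum(consumer_lag(end, committed) for end, committed in partitions.values())
print(total_lag)  # 20 + 0 + 250 = 270
```

A lag that grows steadily is the classic signal that consumers can't keep up with producers, which is typically what those alerts and notifications fire on.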
What to consider as you sort through the options
Your Kafka deployment will be as unique as your environment. You'll need to account for your cloud provider, the size of your deployment, the applications you're running, and the size of your company, among other factors.
In some companies, there may be two departments involved—one to install the clusters and set up the infrastructure, and another to "administer" Kafka, which includes setting up topics, configuring producers and consumers, and connecting it all to the rest of your applications. Even if you have folks on board with some Kafka experience, they may not have the knowledge they need to set it up in a cloud or Kubernetes environment, so you may have to hire for this skill set or train your existing staff. It may take them a while to come up to speed. This indirect cost may not be trivial, especially for a smaller organization.
If you do have to hire folks, think through the range of tasks you might want them to work on. Who should you choose? As you search, keep in mind that many of the most qualified folks won't have the word "Kafka" in their titles. However, a quick search on LinkedIn turned up a few of the job titles that do:
- Kafka site reliability engineer (SRE)
- Staff software engineer, Kafka
- Kafka admin
- Kafka developer
- Kafka engineer
- Kafka support engineer
- Java developer with Kafka
Salary requirements for these folks will vary, depending on your location, the seniority of your candidates, the specific job responsibilities, and so on.
If you're a larger company, you may need to divide up the job by function (infrastructure and development). In smaller companies, you may want to hire folks who will have responsibilities beyond just your Kafka deployment. Either way, this is one of the major costs associated with DIY Kafka.
With a managed solution, you won't need as much Kafka expertise on board, since your provider will take care of most of the operational tasks involved. That said, as mentioned earlier, some solutions may still require you to perform a significant number of setup tasks. You'll still need staff to build your Kafka-based applications and/or integrate them into your application ecosystem.
Hosted versus non-hosted solutions
Depending on the Kafka solution you're considering, you'll need to think about hosting. While this is obvious in the DIY scenario, there are still decisions to make with managed Kafka. Some providers, such as Confluent and Amazon Managed Streaming for Apache Kafka (MSK), include cloud hosting as part of their solutions. Others, such as Aiven and Outshift's Calisti, are not hosted solutions. Still others, such as Instaclustr, give you the option to run your Kafka deployment in their cloud environment or use your own. So you'll need to factor in cloud cost and convenience as you make your choices.
Hybrid open source solutions
If you like the idea of using some of the features available in a managed Kafka solution but still want control over your data, cloud compute, and storage, consider using an open source solution. An example is Koperator, a Kubernetes operator that automates provisioning, management, autoscaling, and operations for Kafka clusters deployed to Kubernetes. Koperator provisions secure, production-ready Kafka clusters, and provides fine-grained configuration and advanced topic and user management through custom resources. Have a look at Koperator's README and feel free to contribute to the project.
Learn more about Outshift open source and join our Slack community to be part of the conversation.