<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dave Klein</title>
    <description>The latest articles on DEV Community by Dave Klein (@daveklein).</description>
    <link>https://dev.to/daveklein</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F292027%2F1c4873a7-dbca-435a-baea-08a4e1938c8c.jpeg</url>
      <title>DEV Community: Dave Klein</title>
      <link>https://dev.to/daveklein</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/daveklein"/>
    <language>en</language>
    <item>
      <title>Getting Started as a Kafka Developer</title>
      <dc:creator>Dave Klein</dc:creator>
      <pubDate>Mon, 30 Jan 2023 20:28:41 +0000</pubDate>
      <link>https://dev.to/daveklein/getting-started-as-a-kafka-developer-2b5n</link>
      <guid>https://dev.to/daveklein/getting-started-as-a-kafka-developer-2b5n</guid>
      <description>&lt;h3&gt;Intro – why then how&lt;/h3&gt;

&lt;p&gt;So, you’ve decided to pursue a career developing applications with Apache Kafka&lt;sup&gt;®&lt;/sup&gt;. You’ve made a wise decision. Kafka is used by over 80% of Fortune 500 companies, and Kafka development ranks as one of the highest-paying job skills in IT. And besides all the practical stuff, it’s a lot of fun to work with! &lt;/p&gt;

&lt;p&gt;Now you’re probably wondering what’s the best way to get started in this new career. In this article, we’ll discuss some tips, strategies, and resources to get you off to a great start on this exciting journey.&lt;/p&gt;

&lt;h3&gt;Developer vs. administrator&lt;/h3&gt;

&lt;p&gt;First, you should be clear about what type of Kafka work you’d like to pursue. There are a variety of roles in this space, but they mainly fall into one of two camps: developer or administrator. Since I’m a developer, that will be the main focus of this article, but if you lean more toward the administrator or operator role, you may still be able to glean some insights. &lt;/p&gt;

&lt;p&gt;For developers, there’s a range of opportunities, such as event-driven microservices, real-time analytics, data pipelines, stream processing, and more. You can get a good feel for some of the ways to use Kafka by perusing these &lt;a href="https://developer.confluent.io/use-case/" rel="noopener noreferrer"&gt;use cases&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you have an idea of the type of work you’d like to do, it’s time to drill into more specifics, like programming languages, industries, and types of companies. &lt;/p&gt;

&lt;h3&gt;Lost in translation&lt;/h3&gt;

&lt;p&gt;When we talk about software development, we tend to think in terms of the language we’re most familiar with, but there is Kafka client support for many languages so I want to be careful not to make assumptions. I mean, you’re probably programming in Haskell, but you might not be 🙂. So we need to consider language options and opportunities. The client libraries that ship with Kafka are in Java and will work with most JVM languages, but there are also very good libraries available for &lt;a href="https://developer.confluent.io/learn-kafka/kafka-python/intro/" rel="noopener noreferrer"&gt;Python&lt;/a&gt;, &lt;a href="https://developer.confluent.io/get-started/c/" rel="noopener noreferrer"&gt;C/C++&lt;/a&gt;, &lt;a href="https://developer.confluent.io/get-started/dotnet/" rel="noopener noreferrer"&gt;.NET&lt;/a&gt;, &lt;a href="https://developer.confluent.io/get-started/nodejs/" rel="noopener noreferrer"&gt;JavaScript&lt;/a&gt;, &lt;a href="https://developer.confluent.io/get-started/go/" rel="noopener noreferrer"&gt;Go&lt;/a&gt;, and others. &lt;/p&gt;

&lt;p&gt;The level of support and popularity of these different Kafka clients varies, with Java being the most popular and having the strongest support, both from Confluent and the community (e.g., external libraries, books, tutorials). So, if you don’t already have a preference, Java might be the way to go. A related consideration is who, in your area, is hiring and what language(s) they are using. A search on a job site, like &lt;a href="https://www.indeed.com/" rel="noopener noreferrer"&gt;Indeed&lt;/a&gt; or &lt;a href="https://www.dice.com/" rel="noopener noreferrer"&gt;Dice&lt;/a&gt;, can be helpful there.&lt;/p&gt;

&lt;h3&gt;Learning resources&lt;/h3&gt;

&lt;p&gt;Once you’ve decided on a language to focus on, it’s time to start filling those knowledge gaps. Don’t be discouraged by this step. We all have knowledge gaps, and filling them can be very rewarding. First, you’ll want to make sure you have a good understanding of Kafka basics. Fortunately, there are many resources to help you with this. A &lt;a href="https://www.google.com/search?q=apache+kafka+books&amp;amp;oq=apache+kafka+books" rel="noopener noreferrer"&gt;web search&lt;/a&gt; will turn up many great books and other resources. And, of course, &lt;a href="https://developer.confluent.io/" rel="noopener noreferrer"&gt;Confluent Developer&lt;/a&gt; offers interactive courses ranging from introductory (&lt;a href="https://developer.confluent.io/learn-kafka/apache-kafka/events/" rel="noopener noreferrer"&gt;Apache Kafka 101&lt;/a&gt;) to advanced (&lt;a href="https://developer.confluent.io/learn-kafka/architecture/get-started/" rel="noopener noreferrer"&gt;Kafka Internals&lt;/a&gt;), full documentation, and other content to help you get started.&lt;/p&gt;

&lt;h3&gt;The Kafka ecosystem&lt;/h3&gt;

&lt;p&gt;The Kafka ecosystem is continually growing, but there are some key components that anyone looking to work with Kafka should be familiar with. &lt;/p&gt;

&lt;h4&gt;Clients&lt;/h4&gt;

&lt;p&gt;We’ve already mentioned the clients. In whichever language you choose, there will be some code available for producing data to Kafka, consuming data from Kafka, and administering resources and configurations on Kafka brokers. Developers should be proficient with these. Aspiring administrators should be familiar with them, as they will affect how users deal with your Kafka cluster. &lt;/p&gt;
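
&lt;p&gt;To make this concrete, here is a minimal sketch of producing and consuming with the Java client. It assumes a broker listening on localhost:9092 and a topic named &lt;code&gt;greetings&lt;/code&gt; that already exists; adjust both for your own setup.&lt;/p&gt;

&lt;pre&gt;import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class GreetingsApp {
  public static void main(String[] args) {
    // Produce one record to the (assumed) "greetings" topic.
    Properties producerProps = new Properties();
    producerProps.put("bootstrap.servers", "localhost:9092"); // assumed broker address
    producerProps.put("key.serializer", StringSerializer.class.getName());
    producerProps.put("value.serializer", StringSerializer.class.getName());
    try (KafkaProducer&amp;lt;String, String&amp;gt; producer = new KafkaProducer&amp;lt;&amp;gt;(producerProps)) {
      producer.send(new ProducerRecord&amp;lt;&amp;gt;("greetings", "en", "Hello, Kafka!"));
    }

    // Read the topic back from the beginning and print what we find.
    Properties consumerProps = new Properties();
    consumerProps.put("bootstrap.servers", "localhost:9092");
    consumerProps.put("group.id", "greetings-reader");
    consumerProps.put("auto.offset.reset", "earliest");
    consumerProps.put("key.deserializer", StringDeserializer.class.getName());
    consumerProps.put("value.deserializer", StringDeserializer.class.getName());
    try (KafkaConsumer&amp;lt;String, String&amp;gt; consumer = new KafkaConsumer&amp;lt;&amp;gt;(consumerProps)) {
      consumer.subscribe(List.of("greetings"));
      for (ConsumerRecord&amp;lt;String, String&amp;gt; record : consumer.poll(Duration.ofSeconds(5))) {
        System.out.println(record.key() + " : " + record.value());
      }
    }
  }
}&lt;/pre&gt;

&lt;p&gt;The administrative side of the client follows the same pattern, with an &lt;code&gt;AdminClient&lt;/code&gt; for creating topics and inspecting configurations programmatically.&lt;/p&gt;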

&lt;h4&gt;Kafka Connect&lt;/h4&gt;

&lt;p&gt;Kafka Connect is a framework built on top of the Kafka Producer and Consumer client libraries. It allows you to integrate external systems, such as databases, analytics engines, and SaaS applications, with Kafka using plugins, called &lt;strong&gt;connectors&lt;/strong&gt;, and some configuration. There are hundreds of connectors available, but the best source of vetted connectors is the &lt;a href="https://www.confluent.io/hub" rel="noopener noreferrer"&gt;Confluent Hub&lt;/a&gt;. There you will find over 200 connectors with all the information you need to use them.&lt;/p&gt;
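
&lt;p&gt;To give a feel for what that configuration looks like, here is a sketch of a payload you could POST to Kafka Connect’s REST API (the &lt;code&gt;/connectors&lt;/code&gt; endpoint). It uses the FileStreamSource connector that ships with Kafka for demo purposes; the connector name, file path, and topic are made up for illustration.&lt;/p&gt;

&lt;pre&gt;{
  "name": "orders-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/orders.txt",
    "topic": "orders"
  }
}&lt;/pre&gt;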

&lt;h4&gt;Kafka Streams&lt;/h4&gt;

&lt;p&gt;Kafka Streams is a Java library that provides powerful APIs and an easy-to-use DSL for building stateless and stateful event streaming applications. Kafka Streams can only be used with JVM languages, but even developers not working with a JVM language would benefit from understanding how it works and the role it plays. There are similar libraries in other languages, and an understanding of Kafka Streams will help you evaluate them.&lt;/p&gt;
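
&lt;p&gt;Here is a minimal sketch of what the DSL looks like: a stateless topology that reads an (assumed) &lt;code&gt;orders&lt;/code&gt; topic, keeps only the larger payloads, and writes them to a second topic. The topic names and the filter are invented purely for illustration.&lt;/p&gt;

&lt;pre&gt;import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class LargeOrdersApp {
  public static void main(String[] args) {
    StreamsBuilder builder = new StreamsBuilder();

    // Read the (assumed) "orders" topic, keep values longer than 100 characters,
    // and write the survivors to "large-orders".
    builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()))
        .filter((key, value) -&amp;gt; value.length() &amp;gt; 100)
        .to("large-orders", Produced.with(Serdes.String(), Serdes.String()));

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "large-orders-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
  }
}&lt;/pre&gt;

&lt;p&gt;Stateful operations such as aggregations and joins follow the same fluent style, with the library managing the backing state stores for you.&lt;/p&gt;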

&lt;h4&gt;Schema Registry&lt;/h4&gt;

&lt;p&gt;Kafka producers and consumers execute separately and don’t know anything about each other, but they do need to know about and agree upon the format of the data they are working with. This is where schemas come into play, and if you’re using schemas with Kafka, you really should be using the &lt;a href="https://docs.confluent.io/platform/current/schema-registry/index.html" rel="noopener noreferrer"&gt;Confluent Schema Registry&lt;/a&gt;. Schema Registry provides a way to store and retrieve schemas, as well as a means of versioning your schemas as they evolve. It works well with most Kafka client libraries and can also be accessed directly via HTTP. You can learn more about it with the &lt;a href="https://developer.confluent.io/learn-kafka/schema-registry/key-concepts/" rel="noopener noreferrer"&gt;Schema Registry 101&lt;/a&gt; course on Confluent Developer.&lt;/p&gt;
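
&lt;p&gt;In a Java client, Schema Registry usually shows up as a serializer setting rather than explicit HTTP calls. Here is a sketch of the relevant producer configuration, assuming Schema Registry is running at localhost:8081 and your record values are Avro (the Confluent &lt;code&gt;kafka-avro-serializer&lt;/code&gt; dependency provides the serializer class).&lt;/p&gt;

&lt;pre&gt;import java.util.Properties;
import org.apache.kafka.common.serialization.StringSerializer;

public class AvroProducerConfig {
  // Producer settings for Avro values managed by Schema Registry.
  public static Properties build() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");          // assumed broker
    props.put("key.serializer", StringSerializer.class.getName());
    props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
    props.put("schema.registry.url", "http://localhost:8081"); // assumed registry URL
    return props;
  }
}&lt;/pre&gt;

&lt;p&gt;With that in place, the serializer registers the value’s schema (if it isn’t already registered) and embeds the schema ID in each message, so consumers can fetch the matching schema from the registry rather than relying on out-of-band agreement.&lt;/p&gt;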

&lt;h4&gt;ksqlDB&lt;/h4&gt;

&lt;p&gt;As we discussed earlier, Kafka Streams is only available on the JVM, but there is another powerful tool for creating event-streaming applications with Kafka, and that is &lt;a href="https://docs.confluent.io/platform/current/ksqldb/index.html" rel="noopener noreferrer"&gt;ksqlDB&lt;/a&gt;. ksqlDB runs in its own cluster and allows you to build streaming applications with SQL. It supports filtering, aggregation, transformation, and joining of event streams and tables based on Kafka topics. Its REST API allows you to interact with those applications from just about any programming language. For more information, check out &lt;a href="https://developer.confluent.io/learn-kafka/ksqldb/intro/" rel="noopener noreferrer"&gt;ksqlDB 101&lt;/a&gt; or &lt;a href="https://developer.confluent.io/learn-kafka/ksqldb/intro/" rel="noopener noreferrer"&gt;Inside ksqlDB&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;Command Line and Graphical interfaces&lt;/h4&gt;

&lt;p&gt;There are many ways to work with Kafka interactively, ranging from the shell scripts that come with Kafka to extensive graphical user interfaces. There isn’t space here to cover them all, but here are a few that are worth checking out.&lt;/p&gt;

&lt;h5&gt;Command Line&lt;/h5&gt;

&lt;p&gt;Confluent CLI - &lt;a href="https://docs.confluent.io/confluent-cli/current/overview.html" rel="noopener noreferrer"&gt;https://docs.confluent.io/confluent-cli/current/overview.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;kcat (formerly KafkaCat) - &lt;a href="https://github.com/edenhill/kcat" rel="noopener noreferrer"&gt;https://github.com/edenhill/kcat&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;kcctl (CLI for Kafka Connect) - &lt;a href="https://github.com/kcctl/kcctl" rel="noopener noreferrer"&gt;https://github.com/kcctl/kcctl&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;GUI&lt;/h5&gt;

&lt;p&gt;Confluent Control Center - &lt;a href="https://docs.confluent.io/platform/current/control-center/index.html" rel="noopener noreferrer"&gt;https://docs.confluent.io/platform/current/control-center/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Conduktor - &lt;a href="https://www.conduktor.io" rel="noopener noreferrer"&gt;https://www.conduktor.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Kafdrop - &lt;a href="https://github.com/obsidiandynamics/kafdrop" rel="noopener noreferrer"&gt;https://github.com/obsidiandynamics/kafdrop&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;akHQ - &lt;a href="https://github.com/tchiotludo/akhq" rel="noopener noreferrer"&gt;https://github.com/tchiotludo/akhq&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Strategy&lt;/h3&gt;

&lt;p&gt;That’s a lot to learn, and it might seem overwhelming, so here’s some advice for how to build your knowledge without burning out. &lt;/p&gt;

&lt;h4&gt;Warming up&lt;/h4&gt;

&lt;p&gt;Before jumping into building Kafka applications, depending on your level of experience, it will be helpful to warm up with some introductory material, for example, the courses on Confluent Developer, such as &lt;a href="https://developer.confluent.io/learn-kafka/apache-kafka/events/" rel="noopener noreferrer"&gt;Kafka 101&lt;/a&gt;, &lt;a href="https://developer.confluent.io/learn-kafka/kafka-connect/intro/" rel="noopener noreferrer"&gt;Kafka Connect 101&lt;/a&gt;, and &lt;a href="https://developer.confluent.io/learn-kafka/schema-registry/key-concepts/" rel="noopener noreferrer"&gt;Schema Registry 101&lt;/a&gt;. These courses have both video and text content that provide you with a gentle introduction to these technologies. They even include exercises that allow you to get hands-on. &lt;/p&gt;

&lt;h4&gt;Working out&lt;/h4&gt;

&lt;p&gt;A great next step is to get more hands-on experience with Kafka using quick-start guides and tutorials, where a problem is presented, and you can use Kafka and your favorite language to solve it. Confluent Developer provides plenty of these. Most of the tutorials are based on &lt;a href="https://developer.confluent.io/tutorials/" rel="noopener noreferrer"&gt;Java&lt;/a&gt;, but there are also getting-started exercises available for &lt;a href="https://developer.confluent.io/get-started/python/" rel="noopener noreferrer"&gt;Python&lt;/a&gt;, &lt;a href="https://developer.confluent.io/get-started/dotnet/" rel="noopener noreferrer"&gt;.NET&lt;/a&gt;, &lt;a href="https://developer.confluent.io/get-started/go/" rel="noopener noreferrer"&gt;Go&lt;/a&gt;, and other languages.&lt;/p&gt;

&lt;h4&gt;Building a project&lt;/h4&gt;

&lt;p&gt;Now that you’ve got some experience working with and solving problems with Kafka, you can continue and accelerate your learning journey by building a project. It can be something for work, a side project, or even a cool demo idea that you can show off at a meetup (more about that in the next section). But whatever project you choose, it should be end-to-end so that you get the broadest possible experience from it. &lt;/p&gt;

&lt;p&gt;The reason that a complete project is so much more helpful than exercises or tutorials is that the problems you are trying to solve are your own. This will provide a much stronger context for what you are learning. Context is like a hook on which to hang knowledge. Things learned without context tend to fade quickly. Just ask someone who’s been cramming for an exam. &lt;/p&gt;

&lt;p&gt;Another benefit of building a project with Kafka is that you can use it to show what you know by hosting your project in a GitHub repository. One of the advantages that we have in the technology space is that it is much easier to show prospective employers what we are capable of, and one of the great ways to do this is via source code repositories.&lt;/p&gt;

&lt;h4&gt;Contributing to Apache Kafka&lt;/h4&gt;

&lt;p&gt;There are many opportunities for involvement in the open source Apache Kafka project. This is a great way to learn as well as to put your learning into practice. Check out the official &lt;a href="https://kafka.apache.org/contributing" rel="noopener noreferrer"&gt;contributors' guide&lt;/a&gt; or watch this &lt;a href="https://www.confluent.io/resources/kafka-summit-2020/getting-started-with-apache-kafka-a-contributors-journey/" rel="noopener noreferrer"&gt;video&lt;/a&gt; from Kafka Summit 2020 for more information and inspiration.&lt;/p&gt;

&lt;h4&gt;Certification&lt;/h4&gt;

&lt;p&gt;Another way to demonstrate your knowledge is certification. While not a silver bullet, when used in conjunction with examples of actual code you’ve written, certification can provide an extra level of comfort to prospective employers. In truth, the more valuable aspect of getting certified is the incentive it provides for learning. Confluent &lt;a href="https://www.confluent.io/certification" rel="noopener noreferrer"&gt;provides certificates&lt;/a&gt; for both developers and administrators, along with suggestions for how to prepare for the exams.  &lt;/p&gt;

&lt;h4&gt;Show your work&lt;/h4&gt;

&lt;p&gt;Chronicling your learning journey by way of a blog can help you to learn faster by better organizing your thoughts. It can also help you to get valuable feedback from those who read it. And it’s a great way to show prospective employers what you’ve learned and your ability to communicate it. &lt;/p&gt;

&lt;h3&gt;Don’t go it alone&lt;/h3&gt;

&lt;p&gt;As you go about building your first project or preparing for your certification exam, you will undoubtedly run into questions. You can find some good answers on Google or Stack Overflow, but there is also a large, active, and helpful Kafka community of developers, administrators, and just all-around good people.&lt;/p&gt;

&lt;p&gt;Getting involved in the community will make your learning journey more enjoyable and more productive. It will also provide you with invaluable networking opportunities that can lead to your first job as a Kafka developer. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://forum.confluent.io" rel="noopener noreferrer"&gt;Confluent Developer Forum&lt;/a&gt; and &lt;a href="https://www.confluent.io/community/ask-the-community/" rel="noopener noreferrer"&gt;Confluent Community Slack&lt;/a&gt; are two great places to introduce yourself to the community, ask questions, and learn from reading others' questions and answers. You can also subscribe to the &lt;a href="https://kafka.apache.org/contact" rel="noopener noreferrer"&gt;Apache Kafka developer and user mailing lists&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But the community is about more than just learning. It’s also about helping, inspiring, collaborating, and building each other up. So don’t just lurk or only post a question when you’re stuck. Get involved. Try to answer some questions, tell us about what you are working on, or share the cool new thing you learned. &lt;/p&gt;

&lt;p&gt;Besides the online text venues, another great way to get to know people in the community is at meetups and conferences. The &lt;a href="https://events.confluent.io/meetups" rel="noopener noreferrer"&gt;Confluent meetup hub&lt;/a&gt; will show you what meetups are coming up. There may be one in your area, or you can join one of the many online meetups from anywhere. As you learn, you may even consider presenting at a meetup. Most of these are recorded, and a link to your presentation could be a great addition to your CV. &lt;/p&gt;

&lt;p&gt;Not only do members of the community help each other with technical issues, they often have valuable career advice. Here are some tips from a few &lt;a href="https://www.confluent.io/nominate/" rel="noopener noreferrer"&gt;Community Catalysts&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/OlenaKutsenko" rel="noopener noreferrer"&gt;Olena Kutsenko&lt;/a&gt; is a Senior Developer Advocate. &lt;/p&gt;

&lt;p&gt;“Starting to work with a new technology is often tough, especially if it is as complex as Apache Kafka. Here is a list of things that I'd recommend to those who want to become an Apache Kafka developer:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Don't be afraid to ask for help. The community around Apache Kafka is knowledgeable and very friendly. So if after reading the docs and trying different approaches, you're still struggling to find a working solution for your task, don't hesitate to ask a question. Who knows, maybe the perfect solution is not so far from what you have! &lt;/li&gt;
&lt;li&gt;Don't get frustrated if the learning process is not as quick as you hoped. The Apache Kafka ecosystem is very wide and it is normal that a single person won't know all the nuances of the system. In fact, it is better to accept the fact that the learning process is never over! That's why it is so vital for us to share knowledge with each other (See the next point ;) ).&lt;/li&gt;
&lt;li&gt;Share what you learned with others. This is the best way to solidify your knowledge and get different perspectives.”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://twitter.com/zychr" rel="noopener noreferrer"&gt;Robert Zych&lt;/a&gt; is a Data Platform Engineer. &lt;/p&gt;

&lt;p&gt;“My 1st tip for anyone (regardless of programming background) interested in using Kafka is to understand when it should and shouldn’t be used. My 2nd tip would be to get/develop experience with a JVM language such as Kotlin, Java, or Scala (in that order). My 3rd tip would be to take courses and CCDAK (Confluent Certified Developer for Apache Kafka). And my 4th tip would be to experiment and build something with Kafka (preferably with helpful friends like you and &lt;a href="https://twitter.com/nbuesing" rel="noopener noreferrer"&gt;Neil&lt;/a&gt; 🙂)”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://twitter.com/nbuesing" rel="noopener noreferrer"&gt;Neil Buesing&lt;/a&gt; is a Kafka and Kafka Streams expert. He shares a few tips from the perspective of a hiring manager.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;“Show interest in Kafka’s improvements (e.g., staying up to date on some KIPs). Being current sets someone apart from others, because they come across as someone who “enjoys Kafka” vs. just using Kafka. &lt;/li&gt;
&lt;li&gt;Operational knowledge - I don’t need a developer to know how to manage a cluster, but understanding operational aspects is helpful; e.g., knowing what a compacted topic is and how it gets compacted is useful, as is knowing the performance cost of calling “commitSync()” after reading every message — I was at a client that did a commitSync() after EVERY consumed message. This is a sign that Kafka is not well understood.&lt;/li&gt;
&lt;li&gt;If you use an additional framework, be sure you know what is part of that framework vs. what is Apache Kafka (e.g., KafkaTemplate is Spring Kafka, not Kafka). — OK, maybe this is just a pet peeve of mine.”&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Along with the Confluent forum and Slack group, you can also find many members of the Kafka community on Twitter, LinkedIn, and other social media outlets. Not only can you find help, inspiration, and potential job leads, but you can also make some great friends. This far-from-complete &lt;a href="https://twitter.com/i/lists/1222519463532285954" rel="noopener noreferrer"&gt;Twitter list&lt;/a&gt; can help you get started meeting some amazing people.&lt;/p&gt;

&lt;h3&gt;Practical Tips&lt;/h3&gt;

&lt;p&gt;When it comes to the actual process of looking for a job, there’s no substitute for good old-fashioned shoe leather. The more doors you knock on, the better your chances. I know that most shoes are soled with rubber these days, and you rarely walk or knock on doors when applying for jobs, but you get my meaning. Apply for as many opportunities as you can find. Even the ones that don’t turn out to be what you were looking for will provide you with valuable practice. Job hunting, and more specifically, interviewing, is a skill, and like all skills, it requires practice. &lt;/p&gt;

&lt;p&gt;Another important point about interviews is to take careful notes of any technical questions that stumped you. Research these areas and, if possible, include what you learn in one of your projects. Remember the value of context. &lt;/p&gt;

&lt;p&gt;As far as where to look, aside from your personal network, &lt;a href="https://www.indeed.com/jobs?q=kafka&amp;amp;l=&amp;amp;from=searchOnHP&amp;amp;vjk=2ebdc72636392d9b" rel="noopener noreferrer"&gt;Indeed.com&lt;/a&gt; is a good place to start. As of this writing, it shows over 14,000 job openings that include Kafka as a desired skill. Though its database is not quite as large, &lt;a href="https://www.dice.com/" rel="noopener noreferrer"&gt;Dice.com&lt;/a&gt; has also yielded good results for me in the past. Both of these sites allow you to post your CV and specify the types of opportunities you are looking for, but don’t stop there. Take the initiative to reach out to recruiters and potential employers. &lt;/p&gt;

&lt;h3&gt;Enjoy the journey&lt;/h3&gt;

&lt;p&gt;We’ve talked about a lot of things that you can do, and it may seem a bit overwhelming, but don’t look at it as a checklist; think of it as a menu of opportunities for a fun adventure. Learning, experimenting, networking, blogging, presenting, and even interviewing can all be a lot of fun. There will be problems to solve, challenges to overcome, people to meet, and milestones to pass along the way. So enjoy the journey, and keep us posted. We’re rooting for you!&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>apachekafka</category>
      <category>beginners</category>
      <category>career</category>
    </item>
    <item>
      <title>Helpful Tools for Apache Kafka Developers</title>
      <dc:creator>Dave Klein</dc:creator>
      <pubDate>Thu, 08 Jul 2021 19:59:42 +0000</pubDate>
      <link>https://dev.to/daveklein/helpful-tools-for-apache-kafka-developers-1l89</link>
      <guid>https://dev.to/daveklein/helpful-tools-for-apache-kafka-developers-1l89</guid>
      <description>&lt;p&gt;&lt;em&gt;This blog post was originally published on the &lt;/em&gt;&lt;a href="https://www.confluent.io/blog/best-kafka-tools-that-boost-developer-productivity/?utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka"&gt;&lt;em&gt;Confluent blog&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Apache Kafka&lt;sup&gt;®&lt;/sup&gt; is at the core of a large ecosystem that includes powerful components, such as Kafka Connect and Kafka Streams. This ecosystem also includes many tools and utilities that make us, as Kafka developers, more productive while making our jobs easier and more enjoyable. Below, we'll take a look at a few of these tools and how they can help us get work done.&lt;/p&gt;

&lt;h2&gt;kafkacat&lt;/h2&gt;

&lt;p&gt;We like to save the best for last, but this tool is too good to wait. So, we'll start off by covering kafkacat.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.confluent.io/platform/current/app-development/kafkacat-usage.html?_ga=2.221456093.492995071.1624902453-363840567.1611349348&amp;amp;utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka" rel="noopener noreferrer"&gt;kafkacat&lt;/a&gt; is a fast and flexible command line Kafka producer, consumer, and more. Magnus Edenhill, the author of the librdkafka C/C++ library for Kafka, developed it. kafkacat is great for quickly producing and consuming data to and from a topic. In fact, the same command will do both, depending on the context. Check this out:&lt;/p&gt;
&lt;pre&gt;$  ~ echo "Hello World" | kafkacat -b localhost:29092 -t hello-topic
% Auto-selecting Producer mode (use -P or -C to override)&lt;/pre&gt;
&lt;p&gt;We’ve sent data to &lt;code&gt;stdout&lt;/code&gt; with echo and piped it to kafkacat. We only needed two simple flags: &lt;code&gt;-b&lt;/code&gt; for the broker and &lt;code&gt;-t&lt;/code&gt; for the topic. kafkacat realizes that we are sending it data and switches into producer mode. Now, we can read that data with the exact same kafkacat command:&lt;/p&gt;
&lt;pre&gt;$  ~ kafkacat -b localhost:29092 -t hello-topic
% Auto-selecting Consumer mode (use -P or -C to override)
Hello World
% Reached end of topic hello-topic [0] at offset 1&lt;/pre&gt;
&lt;p&gt;If we want to send a record with a key, we just need to use a delimiter and tell kafkacat what it is with the &lt;code&gt;-K&lt;/code&gt; flag. In this case, we'll use a colon:&lt;/p&gt;
&lt;pre&gt;$  ~ echo "123:Jane Smith" | kafkacat -b localhost:29092 -t customers -K:
% Auto-selecting Producer mode (use -P or -C to override)&lt;/pre&gt;
&lt;p&gt;Again, the same kafkacat command will read the record from the topic:&lt;/p&gt;
&lt;pre&gt;$  ~ kafkacat -b localhost:29092 -t customers -K:
% Auto-selecting Consumer mode (use -P or -C to override)
123:Jane Smith
% Reached end of topic customers [0] at offset 1&lt;/pre&gt;
&lt;p&gt;Alternatively, we can leave the &lt;code&gt;-K&lt;/code&gt; flag off when reading, if we only want the value:&lt;/p&gt;
&lt;pre&gt;$  ~ kafkacat -b localhost:29092 -t customers
% Auto-selecting Consumer mode (use -P or -C to override)
Jane Smith
% Reached end of topic customers [0] at offset 1&lt;/pre&gt;
&lt;p&gt;Note that piping data from &lt;code&gt;stdout&lt;/code&gt; to kafkacat, as we did above, will spin up a producer, send the data, and then shut the producer down. To start a producer and leave it running to continue sending data, use the &lt;code&gt;-P&lt;/code&gt; flag, as suggested by the &lt;code&gt;auto-selecting&lt;/code&gt; message above. &lt;/p&gt;

&lt;p&gt;The consumer will stay running just as the &lt;code&gt;kafka-console-consumer&lt;/code&gt; would. In order to consume from a topic and immediately exit, we can use the &lt;code&gt;-e&lt;/code&gt; flag. &lt;/p&gt;

&lt;p&gt;To consume data that is in &lt;code&gt;Avro&lt;/code&gt; format, we can use the &lt;code&gt;-s&lt;/code&gt; flag. This flag can be used for the whole record &lt;code&gt;-s avro&lt;/code&gt;, for just the key &lt;code&gt;-s key=avro&lt;/code&gt;, or just the value &lt;code&gt;-s value=avro&lt;/code&gt;. Here's an example using the &lt;code&gt;movies&lt;/code&gt; topic from the popular &lt;a href="https://kafka-tutorials.confluent.io/join-a-stream-to-a-table/kstreams.html?_ga=2.191112175.492995071.1624902453-363840567.1611349348&amp;amp;utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka"&gt;movie rating tutorial&lt;/a&gt;:&lt;/p&gt;
&lt;pre&gt;$  ~ kafkacat -C -b localhost:29092 -t movies -s value=avro -r http://localhost:8081
------------------------------------------------------
{"id": 294, "title": "Die Hard", "release_year": 1988}
{"id": 354, "title": "Tree of Life", "release_year": 2011}
{"id": 782, "title": "A Walk in the Clouds", "release_year": 1995}
{"id": 128, "title": "The Big Lebowski", "release_year": 1998}
{"id": 780, "title": "Super Mario Bros.", "release_year": 1993}&lt;/pre&gt;
&lt;p&gt;There is a lot of power packed into this little tool, and there are many other flags that can be used with it. Running &lt;code&gt;kafkacat -h&lt;/code&gt; will provide the complete list. For more great examples of kafkacat in action, &lt;a href="https://rmoff.net/categories/kafkacat/" rel="noopener noreferrer"&gt;check out related posts&lt;/a&gt; on Robin Moffatt’s blog. One piece missing from kafkacat is the &lt;a href="https://github.com/edenhill/kafkacat/issues/226" rel="noopener noreferrer"&gt;ability to produce data in &lt;code&gt;Avro&lt;/code&gt; format&lt;/a&gt;. As we saw, we can consume &lt;code&gt;Avro&lt;/code&gt; with kafkacat using the Confluent Schema Registry, but we can't produce it. This leads us to our next tool.&lt;/p&gt;

&lt;h2&gt;Confluent REST Proxy&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.confluent.io/platform/current/kafka-rest/index.html?_ga=2.191112175.492995071.1624902453-363840567.1611349348&amp;amp;utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka"&gt;Confluent REST Proxy&lt;/a&gt; is a feature-rich HTTP Kafka client. It can be used to provide Kafka support to applications written in a language without a native Kafka client. This is probably its most common use, but it can also be a handy developer tool. It can easily produce &lt;code&gt;Avro&lt;/code&gt; data to a Kafka topic, as shown here:&lt;/p&gt;
&lt;pre&gt;$ ~ curl -X POST \
-H "Content-Type: application/vnd.kafka.avro.v2+json" \
-H "Accept: application/vnd.kafka.v2+json" \
--data @newMovieData.json "http://localhost:8082/topics/movies"
------------------------------------------------------------------------------
{"offsets":[{"partition":0,"offset":5,"error_code":null,"error":null}],"key_schema_id":null,"value_schema_id":3}&lt;/pre&gt;
&lt;p&gt;REST Proxy is part of the Confluent Platform under the Confluent Community License, but it can be used on its own with any Kafka cluster. It can do a lot more than what we’ll cover here, as you can &lt;a href="https://docs.confluent.io/platform/current/kafka-rest/index.html?_ga=2.191112175.492995071.1624902453-363840567.1611349348&amp;amp;utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka"&gt;see from the docs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As shown above, REST Proxy can be used from the command line with &lt;code&gt;curl&lt;/code&gt; or something similar. It can also be used with tools such as Postman to build a user-friendly Kafka UI. Here's an example of producing to a topic with Postman (the &lt;code&gt;Content-Type&lt;/code&gt; and &lt;code&gt;Accept&lt;/code&gt; headers were set under the “Headers” tab): &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G_KoGJnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.confluent.io/wp-content/uploads/producing-to-a-topic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G_KoGJnx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.confluent.io/wp-content/uploads/producing-to-a-topic.png" alt="Producing to a topic with Postman" width="880" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see from both the &lt;code&gt;curl&lt;/code&gt; and Postman versions, REST Proxy does require that the schema for &lt;code&gt;Avro&lt;/code&gt; messages be passed in with each produce request. A tool like Postman, which allows you to build up a library of saved queries, can make this easier to manage.&lt;/p&gt;

&lt;p&gt;To consume from topics with REST Proxy, we first create a consumer in a consumer group, then subscribe to a topic or topics, and finally fetch records to our heart’s content. We'll switch back to &lt;code&gt;curl&lt;/code&gt; so that we can see all the necessary bits at once. First, we &lt;code&gt;POST&lt;/code&gt; to the consumer’s endpoint with our consumer group name. In this &lt;code&gt;POST&lt;/code&gt; request, we will pass a name for our new consumer instance, the internal data format (in this case, &lt;code&gt;Avro&lt;/code&gt;), and the &lt;code&gt;auto.offset.reset&lt;/code&gt; value.&lt;/p&gt;
&lt;pre&gt;$ ~ curl -X POST  -H "Content-Type: application/vnd.kafka.v2+json" \
      --data '{"name": "movie_consumer_instance", "format": "avro", "auto.offset.reset": "earliest"}' \
      http://localhost:8082/consumers/movie_consumers
------------------------------------------------------------------------------
{"instance_id":"movie_consumer_instance","base_uri":"http://localhost:8082/consumers/movie_consumers/instances/movie_consumer_instance"}&lt;/pre&gt;
&lt;p&gt;This will return the &lt;code&gt;instance id&lt;/code&gt; and base &lt;code&gt;URI&lt;/code&gt; of the newly created consumer instance. Next, we'll use that &lt;code&gt;URI&lt;/code&gt; to subscribe to a topic with a &lt;code&gt;POST&lt;/code&gt; to the &lt;code&gt;subscription&lt;/code&gt; endpoint.&lt;/p&gt;
&lt;pre&gt;$  ~ curl -X POST -H "Content-Type: application/vnd.kafka.v2+json" --data '{"topics":["movies"]}' \
      http://localhost:8082/consumers/movie_consumers/instances/movie_consumer_instance/subscription&lt;/pre&gt;
&lt;p&gt;This doesn't return anything, but should get a &lt;code&gt;204&lt;/code&gt; response. Now we can use a &lt;code&gt;GET&lt;/code&gt; request to the &lt;code&gt;records&lt;/code&gt; endpoint of that same &lt;code&gt;URI&lt;/code&gt; to fetch records.&lt;/p&gt;
&lt;pre&gt;$  ~ curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" \
      http://localhost:8082/consumers/movie_consumers/instances/movie_consumer_instance/records
----------------------------------------------------------------------
[{"topic":"movies","key":null,"value":{"id": 294, "title": "Die Hard", "release_year": 1988},"partition":0,"offset":0},
{"topic":"movies","key":null,"value":{"id": 354, "title": "Tree of Life", "release_year": 2011},"partition":0,"offset":1},{"topic":"movies","key":null,"value":{"id": 782, "title": "A Walk in the Clouds", "release_year": 1995},"partition":0,"offset":2},{"topic":"movies","key":null,"value":{"id": 128, "title": "The Big Lebowski", "release_year": 1998},"partition":0,"offset":3},{"topic":"movies","key":null,"value":{"id": 780, "title": "Super Mario Bros.", "release_year": 1993},"partition":0,"offset":4},{"topic":"movies","key":null,"value":{"id":101,"title":"Chariots of Fire","release_year":1981},"partition":0,"offset":5}]&lt;/pre&gt;
&lt;p&gt;The consumer that we created will remain, and we can make the same &lt;code&gt;GET&lt;/code&gt; request anytime to check for new data. If we no longer need this consumer, we can &lt;code&gt;DELETE&lt;/code&gt; it using the base &lt;code&gt;URI&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;$  ~ curl -X DELETE -H "Content-Type: application/vnd.kafka.v2+json" \
      http://localhost:8082/consumers/movie_consumers/instances/movie_consumer_instance&lt;/pre&gt;
&lt;p&gt;We can also get information about &lt;code&gt;brokers&lt;/code&gt;, &lt;code&gt;topics&lt;/code&gt;, and &lt;code&gt;partitions&lt;/code&gt; with simple &lt;code&gt;GET&lt;/code&gt; requests.&lt;/p&gt;
&lt;pre&gt;$  ~ curl "http://localhost:8082/brokers"&lt;/pre&gt;
&lt;pre&gt;$  ~ curl "http://localhost:8082/topics"&lt;/pre&gt;
&lt;pre&gt;$  ~ curl "http://localhost:8082/topics/movies"&lt;/pre&gt;
&lt;pre&gt;$  ~ curl "http://localhost:8082/topics/movies/partitions"&lt;/pre&gt;
&lt;p&gt;These requests can return quite a bit of &lt;code&gt;JSON&lt;/code&gt; data, which we'll leave off for the sake of space. However, this does lead us nicely to our next tool.&lt;/p&gt;

&lt;h2&gt;
&lt;code&gt;jq&lt;/code&gt;: A command line processor for JSON&lt;/h2&gt;

&lt;p&gt;Though not specific to Kafka, &lt;code&gt;jq&lt;/code&gt; is an incredibly helpful tool when working with other command line utilities that return &lt;code&gt;JSON&lt;/code&gt; data. &lt;code&gt;jq&lt;/code&gt; is a command line utility that allows us to format, manipulate, and extract data from the &lt;code&gt;JSON&lt;/code&gt; output of other programs. Instructions for downloading and installing &lt;code&gt;jq&lt;/code&gt; can be found on &lt;a href="https://stedolan.github.io/jq/download" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, along with links to tutorials and other resources. &lt;/p&gt;

&lt;p&gt;Let's go back and take a look at the REST Proxy output from the &lt;code&gt;GET&lt;/code&gt; call to our consumer above. It's not the largest blob of &lt;code&gt;JSON&lt;/code&gt; out there, but it’s still a bit hard to read. Let's try again, this time piping the output to &lt;code&gt;jq&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;$  ~ curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" \
      http://localhost:8082/consumers/movie_consumers/instances/movie_consumer_instance/records | jq

[
  {
    "topic": "movies",
    "key": null,
    "value": {
      "id": 294,
      "title": "Die Hard",
      "release_year": 1988
    },
    "partition": 0,
    "offset": 0
  },
  {
    "topic": "movies",
    "key": null,
    "value": {
      "id": 354,
      "title": "Tree of Life",
      "release_year": 2011
    },
    "partition": 0,
    "offset": 1
  },
  {
    "topic": "movies",
    "key": null,
    "value": {
      "id": 782,
      "title": "A Walk in the Clouds",
      "release_year": 1995
    },
    "partition": 0,
    "offset": 2
  },
  {
    "topic": "movies",
    "key": null,
    "value": {
      "id": 128,
      "title": "The Big Lebowski",
      "release_year": 1998
    },
    "partition": 0,
    "offset": 3
  },
  {
    "topic": "movies",
    "key": null,
    "value": {
      "id": 780,
      "title": "Super Mario Bros.",
      "release_year": 1993
    },
    "partition": 0,
    "offset": 4
  },
  {
    "topic": "movies",
    "key": null,
    "value": {
      "id": 101,
      "title": "Chariots of Fire",
      "release_year": 1981
    },
    "partition": 0,
    "offset": 5
  }
]&lt;/pre&gt;
&lt;p&gt;It’s much easier to read now but still a bit noisy. Let's say we only want the movie titles and their release years. We can do that easily with &lt;code&gt;jq&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;$  ~ curl -X GET -H "Accept: application/vnd.kafka.avro.v2+json" \
      http://localhost:8082/consumers/movie_consumers/instances/movie_consumer_instance/records \
      | jq '.[] | {title: .value.title, year: .value.release_year}'

{
  "title": "Die Hard",
  "year": 1988
}
{
  "title": "Tree of Life",
  "year": 2011
}
{
  "title": "A Walk in the Clouds",
  "year": 1995
}
{
  "title": "The Big Lebowski",
  "year": 1998
}
{
  "title": "Super Mario Bros.",
  "year": 1993
}
{
  "title": "Chariots of Fire",
  "year": 1981
}&lt;/pre&gt;
&lt;p&gt;Let's take a look at what we just did (and you can &lt;a href="https://jqplay.org/s/FZiB3AoAeo" rel="noopener noreferrer"&gt;follow along with the live example at jqplay&lt;/a&gt;):&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;We piped the output from the REST Proxy to &lt;code&gt;jq&lt;/code&gt;. The bit between the single quotes is a &lt;code&gt;jq&lt;/code&gt; program with two steps. &lt;code&gt;jq&lt;/code&gt; uses the same pipe character to pass the output of one step to the input of another.&lt;/li&gt;
    &lt;li&gt;In our example, the first step in &lt;code&gt;jq&lt;/code&gt; is an iterator, which will read each movie record from the array and pass it to the next step.&lt;/li&gt;
    &lt;li&gt;The second step in &lt;code&gt;jq&lt;/code&gt; creates a new &lt;code&gt;JSON&lt;/code&gt; object from each record. The keys are arbitrary, but the values are derived from the input using &lt;code&gt;jq&lt;/code&gt;'s &lt;code&gt;identity&lt;/code&gt; operator, &lt;code&gt;'.'&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Pretty cool, huh? There is much more that can be done with &lt;code&gt;jq&lt;/code&gt;, and you can &lt;a href="https://stedolan.github.io/jq/manual" rel="noopener noreferrer"&gt;read all about it in the documentation&lt;/a&gt;. The way that &lt;code&gt;jq&lt;/code&gt; operates on a stream of &lt;code&gt;JSON&lt;/code&gt; data, allowing us to combine different operations in order to achieve our desired results, reminds me of Kafka Streams—bringing us to our final tool.&lt;/p&gt;

&lt;h2&gt;Kafka Streams Topology Visualizer&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://zz85.github.io/kafka-streams-viz/" rel="noopener noreferrer"&gt;Kafka Streams Topology Visualizer&lt;/a&gt; takes the text description of a Kafka Streams topology and produces a graphic representation showing input topics, processing nodes, interim topics, state stores, etc. It's a great way to get a big-picture view of a complex Kafka Streams topology. The topology for the movie ratings tutorial is not all that complex, but it will serve nicely to demonstrate this tool. &lt;/p&gt;

&lt;p&gt;Here's the text of our topology, which we captured with the &lt;code&gt;Topology::describe&lt;/code&gt; method:&lt;/p&gt;
&lt;pre&gt;Topologies:
   Sub-topology: 0
    Source: KSTREAM-SOURCE-0000000000 (topics: [movies])
      --&amp;gt; KSTREAM-MAP-0000000001
    Processor: KSTREAM-MAP-0000000001 (stores: [])
      --&amp;gt; KSTREAM-SINK-0000000002
      &amp;lt;-- KSTREAM-SOURCE-0000000000
    Sink: KSTREAM-SINK-0000000002 (topic: rekeyed-movies)
      &amp;lt;-- KSTREAM-MAP-0000000001

   Sub-topology: 1
    Source: KSTREAM-SOURCE-0000000010 (topics: [KSTREAM-MAP-0000000007-repartition])
      --&amp;gt; KSTREAM-JOIN-0000000011
    Processor: KSTREAM-JOIN-0000000011 (stores: [rekeyed-movies-STATE-STORE-0000000003])
      --&amp;gt; KSTREAM-SINK-0000000012
      &amp;lt;-- KSTREAM-SOURCE-0000000010
    Source: KSTREAM-SOURCE-0000000004 (topics: [rekeyed-movies])
      --&amp;gt; KTABLE-SOURCE-0000000005
    Sink: KSTREAM-SINK-0000000012 (topic: rated-movies)
      &amp;lt;-- KSTREAM-JOIN-0000000011
    Processor: KTABLE-SOURCE-0000000005 (stores: [rekeyed-movies-STATE-STORE-0000000003])
      --&amp;gt; none
      &amp;lt;-- KSTREAM-SOURCE-0000000004

   Sub-topology: 2
    Source: KSTREAM-SOURCE-0000000006 (topics: [ratings])
      --&amp;gt; KSTREAM-MAP-0000000007
    Processor: KSTREAM-MAP-0000000007 (stores: [])
      --&amp;gt; KSTREAM-FILTER-0000000009
      &amp;lt;-- KSTREAM-SOURCE-0000000006
    Processor: KSTREAM-FILTER-0000000009 (stores: [])
      --&amp;gt; KSTREAM-SINK-0000000008
      &amp;lt;-- KSTREAM-MAP-0000000007
    Sink: KSTREAM-SINK-0000000008 (topic: KSTREAM-MAP-0000000007-repartition)
      &amp;lt;-- KSTREAM-FILTER-0000000009&lt;/pre&gt;
&lt;p&gt;You may be adept at reading this kind of output, but most people will find a graphical representation very helpful: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--86L2vOhZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.confluent.io/wp-content/uploads/kafka-streams-topology-visualizer-e1611097800555.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--86L2vOhZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.confluent.io/wp-content/uploads/kafka-streams-topology-visualizer-e1611097800555.png" alt="Kafka Streams Topology Visualizer" width="493" height="1424"&gt;&lt;/a&gt; The Kafka Streams Topology Visualizer is a web app that you can host yourself (the source is available on &lt;a href="https://github.com/zz85/kafka-streams-viz" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;). For occasional use, the &lt;a href="https://zz85.github.io/kafka-streams-viz" rel="noopener noreferrer"&gt;public hosted version&lt;/a&gt; is probably sufficient. &lt;/p&gt;

&lt;p&gt;A complex topology might be difficult to view all at once, so you can also visualize sub-topologies and then combine the images in a way that is easier to view. This can be a huge help in bringing new developers up to speed on an existing Kafka Streams application.&lt;/p&gt;

&lt;h3&gt;ksqlDB topologies&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.confluent.io/product/ksql?utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka"&gt;ksqlDB&lt;/a&gt; is an event streaming database that allows us to build complex topologies using syntax familiar to any SQL developer. Since ksqlDB is built on top of Kafka Streams, the Kafka Streams Topology Visualizer also works on these types of topologies. &lt;/p&gt;

&lt;p&gt;We can get the topology description from ksqlDB with the &lt;code&gt;EXPLAIN&lt;/code&gt; command. First, find the executing query:&lt;/p&gt;
&lt;pre&gt;ksql&amp;gt; SHOW QUERIES;

 Query ID              | Query Type | Status    | Sink Name      | Sink Kafka Topic | Query String
--------------------------------------------------------------------------------------------------------------------------
 CSAS_SHIPPED_ORDERS_0 | PERSISTENT | RUNNING:1 | SHIPPED_ORDERS | SHIPPED_ORDERS   | CREATE STREAM SHIPPED_ORDERS WITH 
 ...&lt;/pre&gt;
&lt;p&gt;Now we can use that generated query name, &lt;code&gt;CSAS_SHIPPED_ORDERS_0&lt;/code&gt;, to get the topology:&lt;/p&gt;
&lt;pre&gt;ksql&amp;gt; EXPLAIN CSAS_SHIPPED_ORDERS_0;&lt;/pre&gt;
&lt;p&gt;This gives us a fair amount of output, so we won't show it all here, but toward the end, we see the topology description. Copying and pasting it into the visualizer results in this diagram: &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Vxmd5pxc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.confluent.io/wp-content/uploads/ksqldb-topologies-e1611098767178.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Vxmd5pxc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn.confluent.io/wp-content/uploads/ksqldb-topologies-e1611098767178.png" alt="Kafka Streams Topology Visualizer | ksqlDB topology" width="585" height="1669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Tip of the iceberg&lt;/h2&gt;

&lt;p&gt;We’ve looked at four helpful tools for Apache Kafka developers, but there are many others out there, which is one of the benefits of working in such a vibrant community. If there is a command line tool or a graphical application that you find helpful in getting the most out of Kafka, tell others about it. The &lt;a href="https://forum.confluent.io/?_ga=2.220213213.492995071.1624902453-363840567.1611349348&amp;amp;utm_source=dev&amp;amp;utm_medium=blogpost&amp;amp;utm_campaign=tm.devx_ch.bp-kafka-tools-that-boost-developer-productivity_content.apache-kafka"&gt;Confluent Community Forum&lt;/a&gt; is a great place to share this kind of information. We look forward to continuing this discussion with you there!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>apachekafka</category>
      <category>kafka</category>
      <category>eventstreaming</category>
    </item>
  </channel>
</rss>
