<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ricardo Medeiros</title>
    <description>The latest articles on DEV Community by Ricardo Medeiros (@jjackbauer).</description>
    <link>https://dev.to/jjackbauer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F757631%2F76940fdf-9f04-4bd8-9217-d577d788ffef.jpeg</url>
      <title>DEV Community: Ricardo Medeiros</title>
      <link>https://dev.to/jjackbauer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jjackbauer"/>
    <language>en</language>
    <item>
      <title>You Probably Understood the Client Server Model Wrong</title>
      <dc:creator>Ricardo Medeiros</dc:creator>
      <pubDate>Thu, 11 Dec 2025 15:19:31 +0000</pubDate>
      <link>https://dev.to/jjackbauer/you-probably-understood-the-client-server-model-wrong-3ae1</link>
      <guid>https://dev.to/jjackbauer/you-probably-understood-the-client-server-model-wrong-3ae1</guid>
      <description>&lt;p&gt;Most developers intuitively grasp the client–server model. They know the backend must serve many users while the frontend runs a single isolated instance per person. The problem is not one of misunderstanding but of misapplied mental models.&lt;/p&gt;

&lt;p&gt;Too often, frontend interaction patterns quietly seep into backend architecture. This happens almost invisibly: because a user clicks a button and waits, the backend endpoint is implemented to perform the entire workflow before returning.&lt;/p&gt;

&lt;p&gt;Because the UI shows near-real-time progress, developers reach for WebSockets, SSE, or long-polling, assuming the backend must act with the same immediacy. As the interface suggests an atomic action, the backend is built to execute a full, complex sequence in a single synchronous request.&lt;/p&gt;

&lt;p&gt;These decisions feel natural when thinking from the user outward, but they ignore the fact that frontend and backend systems live under radically different constraints. The frontend exists per user, per tab, per device. If it blocks or performs work synchronously, only one person is affected.&lt;/p&gt;

&lt;p&gt;On the other hand, the backend is a shared computational surface, serving thousands of simultaneous users from a limited pool of machines. A blocking call that seems harmless in the UI becomes costly when multiplied by a large user base.&lt;/p&gt;

&lt;p&gt;This distinction is critical. For lightweight operations—like fetching a user profile or toggling a setting—mimicking the frontend’s synchronous expectations is perfectly acceptable; the resource cost is negligible. The danger arises when we apply that same 'request-response' immediacy to complex business logic or third-party integrations.&lt;/p&gt;

&lt;p&gt;When a heavy, valuable workflow is forced to fit inside the fragile lifespan of a simple HTTP request, the backend stops being a scalable coordinator and becomes a brittle bottleneck.&lt;/p&gt;

&lt;p&gt;Consequently, a heavy workflow executed synchronously does not scale when many people trigger it at once. Frontend mental models make perfect sense for the UI layer but become harmful when projected onto the backend.&lt;/p&gt;

&lt;p&gt;The first step toward correcting this misalignment is to visualize the structural asymmetry. Each user runs their own frontend instance, but a comparatively small set of backend servers must collectively serve all of them.&lt;/p&gt;

&lt;p&gt;When design decisions assume parity between these layers, the backend becomes overloaded—not because of traffic volume alone, but because of how tightly the backend’s execution model is coupled to human interactions.&lt;/p&gt;

&lt;h1&gt;
  
  
  Frontend Per User, Backend Shared by Everyone
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijjwccx2rc2fa37vvitj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fijjwccx2rc2fa37vvitj.png" alt=" " width="800" height="899"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This diagram illustrates the heart of the issue: every user has a dedicated frontend instance, but all users share the backend. If backend routes mimic user-level workflows synchronously, they consume server resources as if each request were serving only that user.&lt;/p&gt;

&lt;p&gt;This leads to patterns where backend operations are tied to the lifetime of an HTTP request or WebSocket connection, causing servers to hold open sockets, allocate memory, keep database transactions alive, or block worker threads while waiting for slow I/O. These patterns work fine when thinking like a single UI instance, but they collapse under real-world concurrency.&lt;/p&gt;

&lt;p&gt;Even when developers use asynchronous programming constructs—such as async/await—the architecture remains synchronous if long-running work is still executed within the request lifecycle.&lt;/p&gt;

&lt;p&gt;Architectural asynchrony is not achieved by non-blocking code alone; it comes from offloading work, decoupling the response from the completion of the operation, and treating the backend as a shared coordinator of workflows rather than a synchronous executor of user-driven commands.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Shift Toward Asynchronous Endpoints
&lt;/h1&gt;

&lt;p&gt;A backend that scales must adopt a different perspective: the purpose of an endpoint is not to “complete the job” but to “accept the job.” True asynchronous architecture emerges not from code-level semantics but from systemic decoupling.&lt;/p&gt;

&lt;p&gt;The backend should perform only the minimal steps required to validate input, authenticate the caller, record the intent, and enqueue the real work elsewhere. It should return immediately—often with a 202 Accepted status—handing back a job identifier that the frontend can use to check progress.&lt;/p&gt;
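&lt;p&gt;As a minimal illustration of that "accept the job" shape, here is a framework-free Python sketch; the queue, store, and function names are hypothetical stand-ins for a real broker, database, and HTTP handler:&lt;/p&gt;

```python
import queue
import uuid

job_queue: "queue.Queue[dict]" = queue.Queue()  # stand-in for SQS/RabbitMQ/Kafka
job_store: dict = {}                            # stand-in for a jobs table

def submit_job(payload: dict, user: str):
    """Accept the job: validate, record intent, enqueue, return at once."""
    # 1. Validate input (a minimal example check).
    if "task" not in payload:
        return 400, {"error": "missing 'task'"}
    # 2. Authenticate the caller (assume middleware resolved `user` upstream).
    if not user:
        return 401, {"error": "unauthenticated"}
    # 3. Record the intent.
    job_id = str(uuid.uuid4())
    job_store[job_id] = {"status": "queued", "owner": user}
    # 4. Enqueue the real work for background processors.
    job_queue.put({"job_id": job_id, "payload": payload})
    # 5. Return immediately: 202 Accepted plus a job identifier to poll.
    return 202, {"job_id": job_id}
```

&lt;p&gt;The handler's cost is a validation check, one insert, and one enqueue, regardless of how long the real work takes.&lt;/p&gt;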

&lt;p&gt;This “accept, then process” pattern changes the nature of the system entirely. Instead of backend servers waiting idly during long-running operations, the work is delegated to background processors or workers that run independently of the request lifecycle. This frees backend capacity for new incoming requests, allowing the system to serve more users without proportional increases in server count.&lt;/p&gt;

&lt;h2&gt;
  
  
  Asynchronous Workflow Sequence
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figf8rjbe2bwj9j439ja8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Figf8rjbe2bwj9j439ja8.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this design, the backend no longer mirrors the interaction model of the UI. The frontend and backend operate on independent clocks: the UI can poll for updates at its own pace, and the backend can schedule work according to resource availability. &lt;/p&gt;

&lt;p&gt;Polling, often dismissed as simplistic, becomes a powerful pattern for background workflows because it keeps servers stateless and scalable, avoids long-lived connections, and shifts the complexity of immediacy away from the backend.&lt;/p&gt;
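&lt;p&gt;A client-side polling loop can be as small as this Python sketch (names are assumptions; &lt;code&gt;fetch_status&lt;/code&gt; stands in for a GET to the status endpoint), using capped exponential backoff so pending jobs don't hammer the backend:&lt;/p&gt;

```python
import time

def poll_until_done(fetch_status, job_id, base_delay=0.5, max_delay=8.0,
                    timeout=60.0, sleep=time.sleep):
    """Poll a job's status with capped exponential backoff."""
    delay, waited = base_delay, 0.0
    while True:
        status = fetch_status(job_id)      # e.g. GET /jobs/:id
        if status in ("succeeded", "failed"):
            return status
        if waited + delay > timeout:
            raise TimeoutError(f"job {job_id} still pending after {timeout}s")
        sleep(delay)
        waited += delay
        delay = min(delay * 2, max_delay)  # back off while the job is pending
```

&lt;p&gt;Injecting &lt;code&gt;sleep&lt;/code&gt; keeps the loop testable; in a browser the same idea maps to &lt;code&gt;setTimeout&lt;/code&gt; with a growing interval.&lt;/p&gt;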

&lt;h1&gt;
  
  
  Status Endpoints and Cache-Assisted Polling
&lt;/h1&gt;

&lt;p&gt;When the frontend receives a job identifier, it begins periodically querying a /jobs/:id endpoint for status updates. This endpoint is simple by design: it retrieves the current state of a workflow from the database or, more efficiently, from a cache.&lt;/p&gt;

&lt;p&gt;This pattern allows backend servers to remain fully stateless. As no request needs to wait for job completion, the backend can scale horizontally without sticky sessions or shared in-memory state. Caching enhances this model. A cache-aside pattern—where the API checks the cache first and falls back to the database only on a miss—is often sufficient for moderate traffic. &lt;/p&gt;

&lt;p&gt;For larger systems, a write-through approach pays off: workers update both the database and the cache when job statuses change. This ensures that nearly all frontend polling requests hit the cache rather than the database, dramatically reducing load.&lt;/p&gt;
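&lt;p&gt;Both read patterns fit in a few lines. The following Python sketch uses plain dicts as hypothetical stand-ins for Redis and the jobs table:&lt;/p&gt;

```python
cache: dict = {}  # stand-in for Redis
db: dict = {}     # stand-in for the jobs table

def get_job_status(job_id: str):
    """Cache-aside read: try the cache, fall back to the DB on a miss."""
    if job_id in cache:
        return cache[job_id]
    status = db.get(job_id)      # miss: read the source of truth
    if status is not None:
        cache[job_id] = status   # warm the cache for subsequent polls
    return status

def set_job_status(job_id: str, status: str) -> None:
    """Write-through update, done by workers: DB and cache stay in step."""
    db[job_id] = status
    cache[job_id] = status
```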

&lt;h2&gt;
  
  
  Status Endpoint Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpvqdakbmueq59fklbsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftpvqdakbmueq59fklbsd.png" alt=" " width="800" height="660"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This architecture exemplifies how decoupled systems naturally scale. Backend servers focus solely on orchestrating requests and responding quickly. Workers and queues handle the heavy lifting. Polling reads mostly from cache. No part of the system depends on a long-lived user connection.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Serverless Completes the Picture
&lt;/h1&gt;

&lt;p&gt;Serverless platforms elevate this model by providing isolation and elasticity at the execution level. While backend servers remain stable and responsive, serverless functions scale independently based on event volume. &lt;/p&gt;

&lt;p&gt;Each job can run in its own execution context without interfering with others, effectively giving the backend a burst capacity proportional to incoming workload rather than to the number of servers provisioned.&lt;/p&gt;

&lt;p&gt;This division of responsibilities is transformative. Backend servers become thin control planes responsible only for coordination, while serverless workers perform compute-heavy or long-running tasks. Instead of scaling servers, the system scales events. When a thousand users submit jobs simultaneously, a thousand serverless functions can execute in parallel without any user blocking the backend.&lt;/p&gt;

&lt;p&gt;Serverless architecture embodies the principle that the backend should not behave like the UI. Instead of executing workflows synchronously, backend servers delegate. Instead of waiting, they acknowledge. Instead of owning long-lived operations, they outsource them to stateless, ephemeral functions optimized for scaling.&lt;/p&gt;

&lt;h1&gt;
  
  
  A New Mental Model for Cloud-Native Backends
&lt;/h1&gt;

&lt;p&gt;Reframing the backend begins with recognizing that frontend patterns should not dictate backend structures. The UI can wait; the backend must not. The UI can block; the backend must stay available. The UI serves a single human; the backend serves everyone simultaneously.&lt;/p&gt;

&lt;p&gt;When dealing with simple CRUD-like operations that lack aggregated business logic, a synchronous model remains the simplest and most effective choice. However, when involving complex business logic, third-party integrations, or intensive traffic peaks, that same model creates a bottleneck. In those high-stakes scenarios, synchronous coupling leads to critical failures that can bring the entire system down.&lt;/p&gt;

&lt;p&gt;Once this mental shift clicks, it becomes clear why asynchronous endpoints, polling, queues, workers, caches, and serverless execution represent not advanced architectural strategies but necessary correctives to an intuitive but incorrect assumption: that backend systems should behave like the interfaces that sit on top of them.&lt;/p&gt;

&lt;p&gt;Backend architecture must reflect backend realities. Designing it with a frontend mindset is what leads to systems that stall, choke, or become expensive to operate. Designing it with asynchronous principles produces architectures that scale effortlessly, fail gracefully, and serve users far beyond what synchronous workflows could ever sustain.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Marked UUIDs Pattern</title>
      <dc:creator>Ricardo Medeiros</dc:creator>
      <pubDate>Fri, 28 Jun 2024 20:24:40 +0000</pubDate>
      <link>https://dev.to/jjackbauer/marked-uuids-pattern-3a2n</link>
      <guid>https://dev.to/jjackbauer/marked-uuids-pattern-3a2n</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;So, here I am again, writing after some time away, because I think this solution saved a lot of work and prevented problems the system could have had. The context was this: my team was responsible for an application that was being integrated into a SAP system by another company, and we had several PSP integrations to provide our services.&lt;/p&gt;

&lt;p&gt;Some of these PSP test environments didn't work at all; they were flaky and inconsistent in their responses, and even the types of the response properties differed between the test and production environments. We needed to provide a way for the SAP consultancy to test the functionality of our platform without the instability of the PSP test environments, so we decided to add a flag that mocked the PSP response as successful (since that was the outcome we were most interested in).&lt;/p&gt;

&lt;p&gt;It was all great, until we realized that we needed to test the refund functionality. Now what? Our application was built on microservices with domain segregation, and we had an Anti-Corruption Layer (ACL) between it and the PSP, so we had no idea what was happening there. Both payment and refund went through the ACL, but it had no persistence, as it was built to be a facade between the third-party service provider and our domain. How could we determine whether a payment was made in the actual service provider's test environment or was mocked by our ACL?&lt;/p&gt;

&lt;h2&gt;
  
  
  Approaches
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Add a parameter to the ACL response indicating whether the response is mocked;&lt;/li&gt;
&lt;li&gt;Add a database to the ACL to record every payment created and whether it was mocked.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Additional Parameter
&lt;/h3&gt;

&lt;p&gt;If we had chosen this path, the application consuming the ACL would have to change to consider this new mocking information, and then save it in order to provide it back to the ACL when a refund was requested. That was too much development just to enable testing; it added nothing to our product, only to its testability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database in the ACL
&lt;/h3&gt;

&lt;p&gt;Imagine this: a service that operated without a database in production would require one to work in the test environment. I promptly refused. Even though I didn't enjoy the first option, this one was far worse. Not only would it require additional infrastructure, it would also add further complexity and latency to the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Avoiding these pains
&lt;/h2&gt;

&lt;p&gt;So I started to think: how could I obtain this information without adding any further infrastructure or parameters? Somehow, I needed to be able to distinguish these records using only the information they already carried. With that in mind, I started to come up with a solution.&lt;/p&gt;

&lt;h3&gt;
  
  
  UUID - The Trojan Horse
&lt;/h3&gt;

&lt;p&gt;Every transaction returned a UUID that should be generated by the PSP (or not). If I could mock this identifier in a way that made it distinguishable from PSP-generated IDs, none of the other transaction information would be affected. With that in mind, I went through several experiments, and here is how I achieved it:&lt;/p&gt;

&lt;h3&gt;
  
  
  Marked UUID
&lt;/h3&gt;

&lt;p&gt;UUIDs are 128-bit identifiers that are typically represented as 32 hexadecimal digits, displayed in five groups separated by hyphens, in the form 8-4-4-4-12 (e.g., 123e4567-e89b-12d3-a456-426614174000). &lt;/p&gt;

&lt;h4&gt;
  
  
  UUIDv4
&lt;/h4&gt;

&lt;p&gt;UUIDv4 is one of the most commonly used versions of UUIDs. It is designed to be a universally unique identifier generated using random or pseudo-random numbers. Here's a detailed overview of UUIDv4:&lt;/p&gt;

&lt;h5&gt;
  
  
  Characteristics of UUIDv4
&lt;/h5&gt;

&lt;p&gt;Randomness: UUIDv4 relies on random or pseudo-random numbers to generate the unique identifier.&lt;br&gt;
No External Information: Unlike other versions of UUIDs, such as UUIDv1 (which incorporates timestamps and MAC addresses), UUIDv4 does not include any external information about the generating system.&lt;br&gt;
Simplicity: UUIDv4 is straightforward to implement because it only requires a good source of randomness.&lt;/p&gt;
&lt;h5&gt;
  
  
  Structure of UUIDv4
&lt;/h5&gt;

&lt;p&gt;A UUIDv4 is a 128-bit value, typically represented as a 36-character string in the format 8-4-4-4-12, separated by hyphens.&lt;/p&gt;
&lt;h5&gt;
  
  
  Example UUIDv4
&lt;/h5&gt;

&lt;p&gt;f47ac10b-58cc-4372-a567-0e02b2c3d479&lt;/p&gt;
&lt;h5&gt;
  
  
  Breakdown of UUIDv4
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;8 characters: Randomly generated 32 bits.&lt;/li&gt;
&lt;li&gt;4 characters: Randomly generated 16 bits.&lt;/li&gt;
&lt;li&gt;4 characters: 4 bits for the version (0100 for version 4), followed by 12 randomly generated bits.&lt;/li&gt;
&lt;li&gt;4 characters: 2 bits for the variant (binary 10 for RFC 4122), followed by 14 randomly generated bits.&lt;/li&gt;
&lt;li&gt;12 characters: Randomly generated 48 bits.&lt;/li&gt;
&lt;/ul&gt;
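&lt;p&gt;This layout is easy to verify with Python's standard &lt;code&gt;uuid&lt;/code&gt; module:&lt;/p&gt;

```python
import uuid

u = uuid.uuid4()
s = str(u)
assert s[14] == "4"     # third group starts with the version nibble: always 4
assert s[19] in "89ab"  # fourth group starts with the variant bits 10xx
assert u.version == 4
```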
&lt;h4&gt;
  
  
  Marking UUIDv4
&lt;/h4&gt;

&lt;p&gt;The process to mark the ID should be something that couldn't happen through valid UUID generation, yet it had to remain compatible with UUID implementations so the IDs could be stored successfully, avoiding any changes outside the Anti-Corruption Layer code.&lt;/p&gt;

&lt;p&gt;I selected the last 12-character section to mark the ID, but I needed it to be impossible to mistake for a real ID, so I devised this algorithm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take the ID without the last 12 characters and the hyphens;&lt;/li&gt;
&lt;li&gt;Use a hash algorithm like SHA-256 to create a mark;&lt;/li&gt;
&lt;li&gt;Grab the first 12 characters (48 bits) of the hash and replace the last 12 characters of the original ID with them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. Stupidly simple, it works with the language's UUID implementation (in my case, Go), and it cannot be mistaken for a genuinely generated UUID. But how can we determine whether an ID is marked? It is as simple as creating one:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take the ID without the last 12 characters and the hyphens;&lt;/li&gt;
&lt;li&gt;Use a hash algorithm like SHA-256 to create a mark;&lt;/li&gt;
&lt;li&gt;Grab the first 12 characters (48 bits) of the hash and compare them to the ID's last 12; if they're equal, the ID was marked.&lt;/li&gt;
&lt;/ul&gt;
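&lt;p&gt;My implementation was in Go, but the same steps fit in a few lines of any language. Here is a Python sketch (function names are mine, not from the original code):&lt;/p&gt;

```python
import hashlib
import uuid

def _mark_for(head: str) -> str:
    # SHA-256 of the first 20 hex digits (the ID minus its last 12, no hyphens)
    return hashlib.sha256(head.encode("ascii")).hexdigest()[:12]

def mark_uuid(u: uuid.UUID) -> uuid.UUID:
    head = u.hex[:-12]  # u.hex: the 32 hex chars without hyphens
    return uuid.UUID(hex=head + _mark_for(head))

def is_marked(u: uuid.UUID) -> bool:
    head, tail = u.hex[:-12], u.hex[-12:]
    return _mark_for(head) == tail
```

&lt;p&gt;Because only the final 48 random bits are replaced, the version and variant nibbles are untouched and the result still parses as a structurally valid UUID.&lt;/p&gt;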
&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;So, how did I employ this to solve the problem? Simple: when the flag to mock PSP responses was enabled, the payment endpoint returned the mocked response with a marked ID. In the refund endpoint, if the flag was enabled, I checked whether the ID was marked; if so, I returned a mocked refund response. If not, I made the PSP call, since that payment had been created in the PSP environment.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If something you have to do seems to require too much corruption of your application, look for alternative approaches to the problem and don't be afraid to try new things!&lt;/p&gt;

&lt;p&gt;For any suggestions, comments, or corrections, feel free to reach out to me on &lt;a href="https://www.linkedin.com/in/rmedio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__757631"&gt;
    &lt;a href="/jjackbauer" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F757631%2F76940fdf-9f04-4bd8-9217-d577d788ffef.jpeg" alt="jjackbauer image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/jjackbauer"&gt;Ricardo Medeiros&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/jjackbauer"&gt;Senior Cloud Software Engineer @Caylent&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>microservices</category>
      <category>thirdpartyintegration</category>
      <category>testenvironment</category>
      <category>webdev</category>
    </item>
    <item>
      <title>KAFKA + KSQLDB + .NET #1</title>
      <dc:creator>Ricardo Medeiros</dc:creator>
      <pubDate>Tue, 30 Nov 2021 17:19:05 +0000</pubDate>
      <link>https://dev.to/vaivoa/kafka-ksqldb-net-1-40g4</link>
      <guid>https://dev.to/vaivoa/kafka-ksqldb-net-1-40g4</guid>
      <description>&lt;p&gt;Hi, I'm &lt;a href="https://github.com/jjackbauer" rel="noopener noreferrer"&gt;Ricardo Medeiros&lt;/a&gt;, .NET back end developer @vaivoa, and today I'm going to walk you through using ksqlDB to query messages produced in kafka by a .NET/C# producer. For this example, I will be deploying my enviroment as containers, described in a docker compose file, to ensure easy reproducibility of my results.&lt;/p&gt;

&lt;p&gt;The source code used in this example is available &lt;a href="https://github.com/jjackbauer/ksqlDBDemo" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;p&gt;First, let's talk about the Docker Compose environment services. The file is available &lt;a href="https://github.com/jjackbauer/ksqlDBDemo/blob/main/docker-compose.yml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  .NET API Producer
&lt;/h3&gt;

&lt;p&gt;The automatically generated .NET API Docker Compose service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ksqldbdemo:
    container_name: ksqldbdemo
    image: ${DOCKER_REGISTRY-}ksqldbdemo
    build:
      context: .
      dockerfile: Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This producer service needs the .NET generated dockerfile shown below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["ksqlDBDemo.csproj", "."]
RUN dotnet restore "ksqlDBDemo.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "ksqlDBDemo.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "ksqlDBDemo.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ksqlDBDemo.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  ZooKeeper
&lt;/h3&gt;

&lt;p&gt;Despite not being necessary since Kafka 2.8, ZooKeeper coordinates Kafka tasks, defining controllers, cluster membership, topic configuration, and more. This tutorial uses the Confluent Inc. ZooKeeper image, due to its use in the reference material. ZooKeeper makes Kafka more reliable, but adds complexity to the system.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zookeeper:
    image: confluentinc/cp-zookeeper:7.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Kafka
&lt;/h3&gt;

&lt;p&gt;Kafka is an event streaming platform capable of handling trillions of events a day. It is based on the abstraction of a distributed commit log. Initially developed at LinkedIn in 2011 to work as a message queue, it has evolved into a full-fledged event streaming platform. Listed as broker in the services, it is the core of this tutorial. Its configuration is tricky, but the setup below worked well in this scenario.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; broker:
    image: confluentinc/cp-kafka:7.0.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  ksqlDB
&lt;/h3&gt;

&lt;p&gt;ksqlDB is a database built to enable distributed stream processing applications. Made to work seamlessly with Kafka, it has a server that runs outside of Kafka, with a REST API, and a CLI application that can be run separately and is used in this tutorial.&lt;/p&gt;
&lt;h4&gt;
  
  
  ksqlDB Server
&lt;/h4&gt;

&lt;p&gt;This example uses the Confluent Inc. image of the ksqlDB server, once more due to its widespread usage.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ksqldb-server:
    image: confluentinc/ksqldb-server:0.22.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker:29092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  ksqlDB CLI
&lt;/h4&gt;

&lt;p&gt;The same goes for the ksqlDB CLI service, which also uses the Confluent Inc. image.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.22.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Kafdrop
&lt;/h3&gt;

&lt;p&gt;Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. It makes Kafka more accessible.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafdrop:
    container_name: kafdrop
    image: obsidiandynamics/kafdrop:latest
    depends_on:
      - broker
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: broker:29092
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Tutorial
&lt;/h2&gt;

&lt;p&gt;Now it's the moment you've been waiting for: let's make it work!&lt;/p&gt;
&lt;h3&gt;
  
  
  Environment
&lt;/h3&gt;

&lt;p&gt;For this tutorial, you'll need a &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; installation, either on a Linux distribution or on Windows with WSL, and &lt;a href="https://git-scm.com/downloads" rel="noopener noreferrer"&gt;git&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Cloning the project
&lt;/h3&gt;

&lt;p&gt;A Visual Studio project is available &lt;a href="https://github.com/jjackbauer/ksqlDBDemo" rel="noopener noreferrer"&gt;here&lt;/a&gt;; it has Docker support and already deploys all the services needed for this demo from the IDE. However, you will be fine if you don't want to, or can't, use Visual Studio. Just clone it by running the following command in the terminal and directory of your preference:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ git clone https://github.com/jjackbauer/ksqlDBDemo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Use the following command to move to the project folder:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ cd /ksqlDBDemo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And, in the project folder, that contains the docker-compose.yml run the following command to deploy the services:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After this command, make sure that all services are running. Sometimes a service fails to start on the first attempt, but that's okay; just bring it up again. To check that everything is running, you can view the services in Docker Desktop, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uakexrf3p7atec8q7k9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uakexrf3p7atec8q7k9.PNG" alt="Docker Desktop" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can execute the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Which should output something like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER ID   IMAGE                               COMMAND                  CREATED       STATUS       PORTS
  NAMES
b42ce9954fd9   ksqldbdemo_ksqldbdemo               "dotnet ksqlDBDemo.d…"   2 hours ago   Up 2 hours   0.0.0.0:9009-&amp;gt;80/tcp, 0.0.0.0:52351-&amp;gt;443/tcp   ksqldbdemo
0a0186712553   confluentinc/ksqldb-cli:0.22.0      "/bin/sh"                2 hours ago   Up 2 hours
  ksqldb-cli
76519de6946e   obsidiandynamics/kafdrop:latest     "/kafdrop.sh"            2 hours ago   Up 2 hours   0.0.0.0:19000-&amp;gt;9000/tcp
  kafdrop
11c3a306ee01   confluentinc/ksqldb-server:0.22.0   "/usr/bin/docker/run"    2 hours ago   Up 2 hours   0.0.0.0:8088-&amp;gt;8088/tcp
  ksqldb-server
07cef9d69267   confluentinc/cp-kafka:7.0.0         "/etc/confluent/dock…"   2 hours ago   Up 2 hours   9092/tcp, 0.0.0.0:29092-&amp;gt;29092/tcp
  broker
3fa1b9a60954   confluentinc/cp-zookeeper:7.0.0     "/etc/confluent/dock…"   2 hours ago   Up 2 hours   2888/tcp, 0.0.0.0:2181-&amp;gt;2181/tcp, 3888/tcp     zookeeper
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  WEB API
&lt;/h3&gt;

&lt;p&gt;Now, with all services up and running, we can access the Web API Swagger to populate our Kafka topics. The code is very simple and is available in the &lt;a href="https://github.com/jjackbauer/ksqlDBDemo" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Web API Swagger UI is deployed at &lt;a href="http://localhost:9009/swagger/index.html" rel="noopener noreferrer"&gt;http://localhost:9009/swagger/index.html&lt;/a&gt;. As shown in the image below, it has two endpoints that create events which could just as well come from independent microservices: one creates an event that registers a userName in the system, and the other takes an Id and generates a three-digit code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99l7s38ffu0tx1gon62r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99l7s38ffu0tx1gon62r.PNG" alt="Swagger Geral" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can create a user with the user name of your choice, as shown:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxlhjsmfeei7wnym834g.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxlhjsmfeei7wnym834g.PNG" alt="Request Create user" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will be assigned a unique Id, as demonstrated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g06gff8025nxw6gze9v.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g06gff8025nxw6gze9v.PNG" alt="Response create user" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can get a three-digit code for your user Id, as displayed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yoezrw53wibte9baiaj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yoezrw53wibte9baiaj.PNG" alt="Get Code Request" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And a random code is generated for the selected Id, as we can observe in the image that follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazp6yri8cy4x4day7vyu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazp6yri8cy4x4day7vyu.PNG" alt="Get Code Response" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
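The payloads behind these two endpoints are simple JSON events. Here is a minimal Python sketch of their shape (the field names match the stream schemas we create later; the actual demo code is C# and lives in the repository):

```python
import json
import random
import uuid

def user_created_event(name):
    # Sketch of the create-user event: a user name plus a generated unique Id.
    return {"Name": name, "Id": str(uuid.uuid4())}

def code_generated_event(user_id):
    # Sketch of the get-code event: a random three-digit code for a given Id.
    return {"Id": user_id, "code": random.randint(100, 999)}

user = user_created_event("ricardo")
code = code_generated_event(user["Id"])
print(json.dumps(user))
print(json.dumps(code))
```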
&lt;h3&gt;
  
  
  Kafdrop
&lt;/h3&gt;

&lt;p&gt;We can use the Kafdrop UI to check that everything is okay. Kafdrop is deployed at &lt;a href="http://localhost:19000/" rel="noopener noreferrer"&gt;http://localhost:19000/&lt;/a&gt;.&lt;br&gt;
There, you will find all the available brokers and topics. It should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9v29cx12cqw5cnp19sr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9v29cx12cqw5cnp19sr.PNG" alt="Kafdrop" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  KSQL CLI
&lt;/h3&gt;

&lt;p&gt;After all that, you'll be able to create your streams of data and query them using ksqlDB. In your preferred terminal, run:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Creating streams
&lt;/h4&gt;

&lt;p&gt;You are now in the ksql CLI, free to create your streams and queries. First, let's create a stream for each of our topics:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE STREAM stream_user (Name VARCHAR, Id VARCHAR)
  WITH (kafka_topic='demo-user', value_format='json', partitions=1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE STREAM stream_code (Id VARCHAR, code INT)
  WITH (kafka_topic='demo-code', value_format='json', partitions=1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Create a materialized view
&lt;/h4&gt;

&lt;p&gt;You can join the user data with the most recent randomized code. To achieve this, create a materialized view table that joins both streams, as in the ksqlDB script that follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE currentCodeView AS
&amp;gt;   SELECT user.Name,
&amp;gt;   LATEST_BY_OFFSET(code.code) AS CurrentCode
&amp;gt;   FROM stream_code code INNER JOIN stream_user user
&amp;gt;   WITHIN 7 DAYS ON code.Id = user.Id
&amp;gt;   GROUP BY user.Name
&amp;gt;EMIT CHANGES;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
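Conceptually, the table keeps the last code seen for each user name, in offset order. A rough Python sketch of that logic (ignoring the 7-day join window for brevity; this is just an illustration, not how ksqlDB works internally):

```python
def latest_code_by_user(user_events, code_events):
    # Join codes to users on Id, keeping only the latest code per user name,
    # mimicking LATEST_BY_OFFSET(code.code) ... GROUP BY user.Name.
    users = {u["Id"]: u["Name"] for u in user_events}
    table = {}
    for c in code_events:  # assumed to arrive in offset order
        name = users.get(c["Id"])
        if name is not None:
            table[name] = c["code"]
    return table

users = [{"Id": "u1", "Name": "alice"}]
codes = [{"Id": "u1", "code": 111}, {"Id": "u1", "code": 222}]
print(latest_code_by_user(users, codes))
```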

&lt;h4&gt;
  
  
  Making a push query
&lt;/h4&gt;

&lt;p&gt;After that, we can query this materialized view:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM currentCodeView 
  EMIT CHANGES;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This push query keeps running until you press Ctrl+C to cancel it.&lt;/p&gt;
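The difference from a regular (pull) query is that a push query never returns a final result set: it emits a new row whenever the view changes. A toy Python analogy using a generator (illustrative only):

```python
def emit_changes(users, code_events):
    # Yields a (name, code) row each time a user's latest code changes,
    # roughly how a push query streams updates until it is cancelled.
    table = {}
    for c in code_events:
        name = users.get(c["Id"])
        if name is not None and table.get(name) != c["code"]:
            table[name] = c["code"]
            yield (name, c["code"])

users = {"u1": "alice"}
events = [{"Id": "u1", "code": 7}, {"Id": "u1", "code": 7}, {"Id": "u1", "code": 42}]
rows = list(emit_changes(users, events))
print(rows)
```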
&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;This tutorial demonstrated that, in a Kafka + ksqlDB environment, you can run SQL queries and even join data coming from different events, which is one of the main complexities involved in microservices systems, and exactly what ksqlDB solves by enabling SQL operations over Kafka topics.&lt;br&gt;
My goal is to keep exploring the possibilities of this ecosystem, and I hope to bring more knowledge on this topic in other articles here. For any suggestions, comments or corrections, feel free to reach out to me on &lt;a href="https://www.linkedin.com/in/rmedio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__757631"&gt;
    &lt;a href="/jjackbauer" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F757631%2F76940fdf-9f04-4bd8-9217-d577d788ffef.jpeg" alt="jjackbauer image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/jjackbauer"&gt;Ricardo Medeiros&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/jjackbauer"&gt;Senior Cloud Software Engineer @Caylent&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ksqldb.io/quickstart.html?_ga=2.218008467.482211024.1638022122-847939024.1633623088&amp;amp;_gac=1.142412294.1634140787.EAIaIQobChMIjOL6pt_H8wIVmcWaCh1KbwgwEAEYASAAEgLBFvD_BwE" rel="noopener noreferrer"&gt;ksqlDB Quickstart&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.confluent.io/platform/current/ksqldb/index.html#ksql-home" rel="noopener noreferrer"&gt;ksqlDB Overview&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.confluent.io/clients-confluent-kafka-dotnet/current/overview.html" rel="noopener noreferrer"&gt;Kafka .NET Client&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/reference/sql/data-types/" rel="noopener noreferrer"&gt;ksqlDB Documentation - Data Types Overview&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/operate-and-deploy/ksql-vs-ksqldb/" rel="noopener noreferrer"&gt;KSQL and ksqlDB&lt;/a&gt;&lt;br&gt;
&lt;a href="https://zookeeper.apache.org/" rel="noopener noreferrer"&gt;Welcome to Apache ZooKeeper&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dattell.com/data-architecture-blog/what-is-zookeeper-how-does-it-support-kafka/" rel="noopener noreferrer"&gt;What is ZooKeeper &amp;amp; How Does it Support Kafka?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.confluent.io/what-is-apache-kafka/?utm_medium=sem&amp;amp;utm_source=google&amp;amp;utm_campaign=ch.sem_br.nonbrand_tp.prs_tgt.kafka_mt.xct_rgn.latam_lng.eng_dv.all_con.kafka-general&amp;amp;utm_term=apache%20kafka&amp;amp;creative=&amp;amp;device=c&amp;amp;placement=&amp;amp;gcli&amp;lt;br&amp;gt;%0Ad=Cj0KCQiA7oyNBhDiARIsADtGRZYDVaYjkPkoJQHNrz_xBodIq2P8ztwb8g3OTiRG_wMHXyzof1nqKEMaAoT_EALw_wcB" rel="noopener noreferrer"&gt;What is Apache Kafka®?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ksqldb.io/" rel="noopener noreferrer"&gt;ksqlDB - The database purpose-built for stream processing applications&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ksqldb.io/overview.html" rel="noopener noreferrer"&gt;An overview of ksqlDB&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-reference/create-table-as-select/" rel="noopener noreferrer"&gt;CREATE TABLE AS SELECT&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kafka-tutorials.confluent.io/join-a-stream-to-a-stream/ksql.html" rel="noopener noreferrer"&gt;How to join a stream and a stream&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/concepts/time-and-windows-in-ksqldb-queries/" rel="noopener noreferrer"&gt;Time and Windows in ksqlDB Queries&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/reference/sql/time/" rel="noopener noreferrer"&gt;Time operations&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8bndcx2jkn1jz1dy98v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8bndcx2jkn1jz1dy98v.png" alt="linha horizontal" width="800" height="3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Disclaimer
&lt;/h1&gt;

&lt;p&gt;VaiVoa encourages its developers in their process of growth and technical acceleration. The published articles do not reflect VaiVoa's opinion; their publication serves the purpose of stimulating debate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wmziqv74ghhgyi9p0om.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wmziqv74ghhgyi9p0om.png" alt="logo vaivoa" width="548" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>ksqldb</category>
      <category>microservices</category>
      <category>docker</category>
    </item>
    <item>
      <title>KAFKA + KSQLDB + .NET #1</title>
      <dc:creator>Ricardo Medeiros</dc:creator>
      <pubDate>Mon, 29 Nov 2021 20:13:06 +0000</pubDate>
      <link>https://dev.to/jjackbauer/kafka-ksqldb-net-19kc</link>
      <guid>https://dev.to/jjackbauer/kafka-ksqldb-net-19kc</guid>
      <description>&lt;p&gt;Hi, I'm &lt;a href="https://github.com/jjackbauer" rel="noopener noreferrer"&gt;Ricardo Medeiros&lt;/a&gt;, .NET back end developer @vaivoa, and today I'm going to walk you through using ksqlDB to query messages produced in kafka by a .NET/C# producer. For this example, I will be deploying my enviroment as containers, described in a docker compose file, to ensure easy reproducibility of my results.&lt;/p&gt;

&lt;p&gt;The source code used in this example is available &lt;a href="https://github.com/jjackbauer/ksqlDBDemo" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Services
&lt;/h2&gt;

&lt;p&gt;First, let's talk about the docker compose environment services. The file is available &lt;a href="https://github.com/jjackbauer/ksqlDBDemo/blob/main/docker-compose.yml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  .NET API Producer
&lt;/h3&gt;

&lt;p&gt;The automatically generated .NET API docker compose service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ksqldbdemo:
    container_name: ksqldbdemo
    image: ${DOCKER_REGISTRY-}ksqldbdemo
    build:
      context: .
      dockerfile: Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This producer service needs the .NET-generated Dockerfile shown below:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["ksqlDBDemo.csproj", "."]
RUN dotnet restore "ksqlDBDemo.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "ksqlDBDemo.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "ksqlDBDemo.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ksqlDBDemo.dll"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  ZooKeeper
&lt;/h3&gt;

&lt;p&gt;Despite no longer being necessary since Kafka 2.8, ZooKeeper coordinates Kafka tasks: electing controllers, tracking cluster membership, holding topic configuration, and more. This tutorial uses the Confluent Inc. ZooKeeper image, due to its use in the reference material. ZooKeeper makes Kafka more reliable, but adds complexity to the system.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zookeeper:
    image: confluentinc/cp-zookeeper:7.0.0
    hostname: zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Kafka
&lt;/h3&gt;

&lt;p&gt;Kafka is an event streaming platform capable of handling trillions of events a day. It is based on the abstraction of a distributed commit log. Initially developed at LinkedIn in 2011 to work as a message queue, it has evolved into a full-fledged event streaming platform. Listed as broker in the services, it is the core of this tutorial. Its configuration is tricky, but the following setup worked well in this scenario.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; broker:
    image: confluentinc/cp-kafka:7.0.0
    hostname: broker
    container_name: broker
    depends_on:
      - zookeeper
    ports:
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
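The "distributed commit log" abstraction mentioned above can be illustrated with a toy Python sketch: producers append records to an ordered log, and consumers replay them from an offset of their choosing (a simplification, of course, with none of Kafka's partitioning or replication):

```python
class CommitLog:
    """Toy single-partition commit log: an append-only list addressed by offset."""

    def __init__(self):
        self._records = []

    def append(self, record):
        # Producer side: append the record and return its offset.
        self._records.append(record)
        return len(self._records) - 1

    def read_from(self, offset):
        # Consumer side: replay every record at or after the given offset.
        return self._records[offset:]

log = CommitLog()
log.append("user-created")
offset = log.append("code-generated")
print(log.read_from(offset))
```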

&lt;h3&gt;
  
  
  ksqlDB
&lt;/h3&gt;

&lt;p&gt;ksqlDB is a database built for distributed stream processing applications. Made to work seamlessly with Kafka, it has a server that runs outside of Kafka, exposing a REST API, and a CLI application that can be run separately, which is what this tutorial uses.&lt;/p&gt;
&lt;h4&gt;
  
  
  ksqlDB Server
&lt;/h4&gt;

&lt;p&gt;This example uses the Confluent Inc. image of the ksqlDB server, once more due to its widespread usage.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ksqldb-server:
    image: confluentinc/ksqldb-server:0.22.0
    hostname: ksqldb-server
    container_name: ksqldb-server
    depends_on:
      - broker
    ports:
      - "8088:8088"
    environment:
      KSQL_LISTENERS: http://0.0.0.0:8088
      KSQL_BOOTSTRAP_SERVERS: broker:29092
      KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
      KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  ksqlDB CLI
&lt;/h4&gt;

&lt;p&gt;The same goes for the ksqlDB CLI service, which also uses the Confluent Inc. image.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ksqldb-cli:
    image: confluentinc/ksqldb-cli:0.22.0
    container_name: ksqldb-cli
    depends_on:
      - broker
      - ksqldb-server
    entrypoint: /bin/sh
    tty: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Kafdrop
&lt;/h3&gt;

&lt;p&gt;Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. It makes Kafka more accessible.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kafdrop:
    container_name: kafdrop
    image: obsidiandynamics/kafdrop:latest
    depends_on:
      - broker
    ports:
      - 19000:9000
    environment:
      KAFKA_BROKERCONNECT: broker:29092
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Tutorial
&lt;/h2&gt;

&lt;p&gt;Now for the moment you have been waiting for: let's make it work!&lt;/p&gt;
&lt;h3&gt;
  
  
  Environment
&lt;/h3&gt;

&lt;p&gt;For this tutorial, you'll need a &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt; installation, whether on a Linux distribution or on Windows with WSL, as well as &lt;a href="https://git-scm.com/downloads" rel="noopener noreferrer"&gt;git&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Cloning the project
&lt;/h3&gt;

&lt;p&gt;A Visual Studio project is available &lt;a href="https://github.com/jjackbauer/ksqlDBDemo" rel="noopener noreferrer"&gt;here&lt;/a&gt;; it has docker support and already deploys all the services needed for this demo from the IDE. However, you will be fine if you don't want to, or can't, use Visual Studio. Just clone it by running the following command in the terminal and directory of your preference:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; $ git clone https://github.com/jjackbauer/ksqlDBDemo.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Use the following command to move to the project folder:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cd ksqlDBDemo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then, in the project folder, which contains the docker-compose.yml, run the following command to deploy the services:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;After this command, make sure that all services are running. Sometimes services go down, but that is okay. To see whether everything is running, you can check the services in Docker Desktop, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uakexrf3p7atec8q7k9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uakexrf3p7atec8q7k9.PNG" alt="Docker Desktop" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can execute the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Which should output something like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CONTAINER ID   IMAGE                               COMMAND                  CREATED       STATUS       PORTS
  NAMES
b42ce9954fd9   ksqldbdemo_ksqldbdemo               "dotnet ksqlDBDemo.d…"   2 hours ago   Up 2 hours   0.0.0.0:9009-&amp;gt;80/tcp, 0.0.0.0:52351-&amp;gt;443/tcp   ksqldbdemo
0a0186712553   confluentinc/ksqldb-cli:0.22.0      "/bin/sh"                2 hours ago   Up 2 hours
  ksqldb-cli
76519de6946e   obsidiandynamics/kafdrop:latest     "/kafdrop.sh"            2 hours ago   Up 2 hours   0.0.0.0:19000-&amp;gt;9000/tcp
  kafdrop
11c3a306ee01   confluentinc/ksqldb-server:0.22.0   "/usr/bin/docker/run"    2 hours ago   Up 2 hours   0.0.0.0:8088-&amp;gt;8088/tcp
  ksqldb-server
07cef9d69267   confluentinc/cp-kafka:7.0.0         "/etc/confluent/dock…"   2 hours ago   Up 2 hours   9092/tcp, 0.0.0.0:29092-&amp;gt;29092/tcp
  broker
3fa1b9a60954   confluentinc/cp-zookeeper:7.0.0     "/etc/confluent/dock…"   2 hours ago   Up 2 hours   2888/tcp, 0.0.0.0:2181-&amp;gt;2181/tcp, 3888/tcp     zookeeper
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  WEB API
&lt;/h3&gt;

&lt;p&gt;Now, with all services up and running, we can access the Web API Swagger UI to populate our Kafka topics. The code is very simple and is available in the &lt;a href="https://github.com/jjackbauer/ksqlDBDemo" rel="noopener noreferrer"&gt;repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The Web API Swagger UI is deployed at &lt;a href="http://localhost:9009/swagger/index.html" rel="noopener noreferrer"&gt;http://localhost:9009/swagger/index.html&lt;/a&gt;. As shown in the image below, it has two endpoints that create events which could just as well come from independent microservices: one creates an event that registers a userName in the system, and the other takes an Id and generates a three-digit code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99l7s38ffu0tx1gon62r.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99l7s38ffu0tx1gon62r.PNG" alt="Swagger Geral" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you can create a user with the user name of your choice, as shown:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxlhjsmfeei7wnym834g.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmxlhjsmfeei7wnym834g.PNG" alt="Request Create user" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It will be assigned a unique Id, as demonstrated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g06gff8025nxw6gze9v.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3g06gff8025nxw6gze9v.PNG" alt="Response create user" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you can get a three-digit code for your user Id, as displayed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yoezrw53wibte9baiaj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6yoezrw53wibte9baiaj.PNG" alt="Get Code Request" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And a random code is generated for the selected Id, as we can observe in the image that follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazp6yri8cy4x4day7vyu.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazp6yri8cy4x4day7vyu.PNG" alt="Get Code Response" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
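The payloads behind these two endpoints are simple JSON events. Here is a minimal Python sketch of their shape (the field names match the stream schemas we create later; the actual demo code is C# and lives in the repository):

```python
import json
import random
import uuid

def user_created_event(name):
    # Sketch of the create-user event: a user name plus a generated unique Id.
    return {"Name": name, "Id": str(uuid.uuid4())}

def code_generated_event(user_id):
    # Sketch of the get-code event: a random three-digit code for a given Id.
    return {"Id": user_id, "code": random.randint(100, 999)}

user = user_created_event("ricardo")
code = code_generated_event(user["Id"])
print(json.dumps(user))
print(json.dumps(code))
```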
&lt;h3&gt;
  
  
  Kafdrop
&lt;/h3&gt;

&lt;p&gt;We can use the Kafdrop UI to check that everything is okay. Kafdrop is deployed at &lt;a href="http://localhost:19000/" rel="noopener noreferrer"&gt;http://localhost:19000/&lt;/a&gt;.&lt;br&gt;
There, you will find all the available brokers and topics. It should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9v29cx12cqw5cnp19sr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9v29cx12cqw5cnp19sr.PNG" alt="Kafdrop" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  KSQL CLI
&lt;/h3&gt;

&lt;p&gt;After all that, you'll be able to create your streams of data and query them using ksqlDB. In your preferred terminal, run:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec -it ksqldb-cli ksql http://ksqldb-server:8088
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Creating streams
&lt;/h4&gt;

&lt;p&gt;You are now in the ksql CLI, free to create your streams and queries. First, let's create a stream for each of our topics:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE STREAM stream_user (Name VARCHAR, Id VARCHAR)
  WITH (kafka_topic='demo-user', value_format='json', partitions=1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE STREAM stream_code (Id VARCHAR, code INT)
  WITH (kafka_topic='demo-code', value_format='json', partitions=1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h4&gt;
  
  
  Create a materialized view
&lt;/h4&gt;

&lt;p&gt;You can join the user data with the most recent randomized code. To achieve this, create a materialized view table that joins both streams, as in the ksqlDB script that follows:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE currentCodeView AS
&amp;gt;   SELECT user.Name,
&amp;gt;   LATEST_BY_OFFSET(code.code) AS CurrentCode
&amp;gt;   FROM stream_code code INNER JOIN stream_user user
&amp;gt;   WITHIN 7 DAYS ON code.Id = user.Id
&amp;gt;   GROUP BY user.Name
&amp;gt;EMIT CHANGES;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
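Conceptually, the table keeps the last code seen for each user name, in offset order. A rough Python sketch of that logic (ignoring the 7-day join window for brevity; this is just an illustration, not how ksqlDB works internally):

```python
def latest_code_by_user(user_events, code_events):
    # Join codes to users on Id, keeping only the latest code per user name,
    # mimicking LATEST_BY_OFFSET(code.code) ... GROUP BY user.Name.
    users = {u["Id"]: u["Name"] for u in user_events}
    table = {}
    for c in code_events:  # assumed to arrive in offset order
        name = users.get(c["Id"])
        if name is not None:
            table[name] = c["code"]
    return table

users = [{"Id": "u1", "Name": "alice"}]
codes = [{"Id": "u1", "code": 111}, {"Id": "u1", "code": 222}]
print(latest_code_by_user(users, codes))
```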

&lt;h4&gt;
  
  
  Making a push query
&lt;/h4&gt;

&lt;p&gt;After that, we can query this materialized view:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM currentCodeView 
  EMIT CHANGES;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This push query keeps running until you press Ctrl+C to cancel it.&lt;/p&gt;
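The difference from a regular (pull) query is that a push query never returns a final result set: it emits a new row whenever the view changes. A toy Python analogy using a generator (illustrative only):

```python
def emit_changes(users, code_events):
    # Yields a (name, code) row each time a user's latest code changes,
    # roughly how a push query streams updates until it is cancelled.
    table = {}
    for c in code_events:
        name = users.get(c["Id"])
        if name is not None and table.get(name) != c["code"]:
            table[name] = c["code"]
            yield (name, c["code"])

users = {"u1": "alice"}
events = [{"Id": "u1", "code": 7}, {"Id": "u1", "code": 7}, {"Id": "u1", "code": 42}]
rows = list(emit_changes(users, events))
print(rows)
```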
&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;This tutorial demonstrated that, in a Kafka + ksqlDB environment, you can run SQL queries and even join data coming from different events, which is one of the main complexities involved in microservices systems, and exactly what ksqlDB solves by enabling SQL operations over Kafka topics.&lt;br&gt;
My goal is to keep exploring the possibilities of this ecosystem, and I hope to bring more knowledge on this topic in other articles here. For any suggestions, comments or corrections, feel free to reach out to me on &lt;a href="https://www.linkedin.com/in/rmedio/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;


&lt;div class="ltag__user ltag__user__id__757631"&gt;
    &lt;a href="/jjackbauer" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F757631%2F76940fdf-9f04-4bd8-9217-d577d788ffef.jpeg" alt="jjackbauer image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/jjackbauer"&gt;Ricardo Medeiros&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/jjackbauer"&gt;Senior Cloud Software Engineer @Caylent&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ksqldb.io/quickstart.html?_ga=2.218008467.482211024.1638022122-847939024.1633623088&amp;amp;_gac=1.142412294.1634140787.EAIaIQobChMIjOL6pt_H8wIVmcWaCh1KbwgwEAEYASAAEgLBFvD_BwE" rel="noopener noreferrer"&gt;ksqlDB Quickstart&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.confluent.io/platform/current/ksqldb/index.html#ksql-home" rel="noopener noreferrer"&gt;ksqlDB Overview&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.confluent.io/clients-confluent-kafka-dotnet/current/overview.html" rel="noopener noreferrer"&gt;Kafka .NET Client&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/reference/sql/data-types/" rel="noopener noreferrer"&gt;ksqlDB Documentation - Data Types Overview&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/operate-and-deploy/ksql-vs-ksqldb/" rel="noopener noreferrer"&gt;KSQL and ksqlDB&lt;/a&gt;&lt;br&gt;
&lt;a href="https://zookeeper.apache.org/" rel="noopener noreferrer"&gt;Welcome to Apache ZooKeeper&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dattell.com/data-architecture-blog/what-is-zookeeper-how-does-it-support-kafka/" rel="noopener noreferrer"&gt;What is ZooKeeper &amp;amp; How Does it Support Kafka?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.confluent.io/what-is-apache-kafka/?utm_medium=sem&amp;amp;utm_source=google&amp;amp;utm_campaign=ch.sem_br.nonbrand_tp.prs_tgt.kafka_mt.xct_rgn.latam_lng.eng_dv.all_con.kafka-general&amp;amp;utm_term=apache%20kafka&amp;amp;creative=&amp;amp;device=c&amp;amp;placement=&amp;amp;gcli&amp;lt;br&amp;gt;%0Ad=Cj0KCQiA7oyNBhDiARIsADtGRZYDVaYjkPkoJQHNrz_xBodIq2P8ztwb8g3OTiRG_wMHXyzof1nqKEMaAoT_EALw_wcB" rel="noopener noreferrer"&gt;What is Apache Kafka®?&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ksqldb.io/" rel="noopener noreferrer"&gt;ksqlDB - The database purpose-built for stream processing applications&lt;/a&gt;&lt;br&gt;
&lt;a href="https://ksqldb.io/overview.html" rel="noopener noreferrer"&gt;An overview of ksqlDB&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/developer-guide/ksqldb-reference/create-table-as-select/" rel="noopener noreferrer"&gt;CREATE TABLE AS SELECT&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kafka-tutorials.confluent.io/join-a-stream-to-a-stream/ksql.html" rel="noopener noreferrer"&gt;How to join a stream and a stream&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/concepts/time-and-windows-in-ksqldb-queries/" rel="noopener noreferrer"&gt;Time and Windows in ksqlDB Queries&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.ksqldb.io/en/latest/reference/sql/time/" rel="noopener noreferrer"&gt;Time operations&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>ksqldb</category>
      <category>microservices</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
