<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Roberto de Vargas Neto</title>
    <description>The latest articles on DEV Community by Roberto de Vargas Neto (@rvneto).</description>
    <link>https://dev.to/rvneto</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1452979%2Fbc4c4c00-b272-4c46-8471-cfff8ff5d101.jpg</url>
      <title>DEV Community: Roberto de Vargas Neto</title>
      <link>https://dev.to/rvneto</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rvneto"/>
    <language>en</language>
    <item>
      <title>From Stream to Database: Processing Market Data with Spring Boot, Redis, and Flyway</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Wed, 01 Apr 2026 20:34:52 +0000</pubDate>
      <link>https://dev.to/rvneto/from-stream-to-database-processing-market-data-with-spring-boot-redis-and-flyway-35m9</link>
      <guid>https://dev.to/rvneto/from-stream-to-database-processing-market-data-with-spring-boot-redis-and-flyway-35m9</guid>
      <description>&lt;p&gt;Hello everyone!&lt;/p&gt;

&lt;p&gt;In my last post, we saw how our Python service collects B3 data and publishes it to Kafka. Today, we take a crucial step: &lt;strong&gt;consuming this data and making it useful for our brokerage ecosystem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ll introduce the &lt;strong&gt;Broker Asset API&lt;/strong&gt;, the Java microservice responsible for managing the asset catalog, keeping prices updated, and serving this information with ultra-low latency.&lt;/p&gt;




&lt;h1&gt;
  
  
  🎯 MVP Focus (Minimum Viable Product)
&lt;/h1&gt;

&lt;p&gt;Before diving into the code, a quick disclaimer: &lt;strong&gt;we are building the foundation.&lt;/strong&gt; At this stage, the goal is to ensure the end-to-end flow works seamlessly.&lt;/p&gt;

&lt;p&gt;The focus is on delivering core value: &lt;strong&gt;making data available and performant.&lt;/strong&gt; In the future, we will revisit this service to add unit tests, refine exception handling, and increase resilience.&lt;/p&gt;




&lt;h1&gt;
  
  
  🏗️ The Pillars of the Asset API
&lt;/h1&gt;

&lt;p&gt;For this MVP, I focused on four main implementation points:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Database Evolution with Flyway
&lt;/h2&gt;

&lt;p&gt;To ensure our MySQL schema is versioned and reproducible, I used &lt;strong&gt;Flyway&lt;/strong&gt;. We created the &lt;code&gt;assets&lt;/code&gt; table to store the ticker, name, current price, and asset status.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Technical Highlight:&lt;/strong&gt; Using an index on the &lt;code&gt;ticker&lt;/code&gt; field ensures that symbol-based queries are extremely fast.&lt;/li&gt;
&lt;/ul&gt;
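&lt;p&gt;For context, a Flyway migration along these lines would create that table (the file name follows Flyway's versioned-migration convention; the column names and types here are illustrative, not the project's exact schema):&lt;/p&gt;

```sql
-- V1__create_assets_table.sql — illustrative sketch, not the project's exact schema
CREATE TABLE assets (
    id            BIGINT AUTO_INCREMENT PRIMARY KEY,
    ticker        VARCHAR(10)   NOT NULL,
    name          VARCHAR(255)  NOT NULL,
    current_price DECIMAL(19, 4),
    status        VARCHAR(20)   NOT NULL DEFAULT 'ACTIVE',
    updated_at    TIMESTAMP     DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    -- the UNIQUE constraint creates the index that backs fast symbol lookups
    CONSTRAINT uk_assets_ticker UNIQUE (ticker)
);
```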

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hof4afzcqjoctd91zum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hof4afzcqjoctd91zum.png" alt=" " width="594" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Kafka Consumer: Real-Time Reactivity
&lt;/h2&gt;

&lt;p&gt;The API "listens" to the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic. As soon as the Python service publishes a new price, our consumer captures the message and triggers the update flow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ordering Guarantee:&lt;/strong&gt; Since we use the ticker as the Kafka key, all updates for a given asset land on the same partition and are therefore processed in the order they were produced.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@KafkaListener&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;topics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"trading-assets-market-data-v1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;groupId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"broker-asset-api"&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;consume&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AssetMarketDataDTO&lt;/span&gt; &lt;span class="n"&gt;dto&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received market data for: {}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dto&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getTicker&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="n"&gt;assetService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;updateAsset&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dto&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Service Layer: Hybrid Persistence (SQL + Redis)
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;AssetService&lt;/code&gt; is where the business logic resides. Upon receiving a new price, it performs two operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MySQL Persistence:&lt;/strong&gt; Updates the asset record (or creates a new one) to ensure data consistency.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6worf9f5klclv7ibhdq3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6worf9f5klclv7ibhdq3.png" alt=" " width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Cache Update (Redis):&lt;/strong&gt; The price is sent to a Redis cache under the key &lt;code&gt;market:price:{ticker}&lt;/code&gt;, which allows other system components to query prices instantly without overloading the relational database.&lt;/li&gt;
&lt;/ol&gt;
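&lt;p&gt;Putting the two operations together, the core of the service can be sketched in plain Java (names are hypothetical; in-memory maps stand in here for the real JPA repository and Redis client so the flow is easy to follow in isolation):&lt;/p&gt;

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the hybrid-persistence flow. The maps below stand in for
// MySQL and Redis; the real service would use a JPA repository and a Redis
// client instead. Names are illustrative, not the project's actual API.
public class AssetServiceSketch {
    private final Map<String, BigDecimal> database = new HashMap<>(); // stand-in for MySQL
    private final Map<String, String> cache = new HashMap<>();        // stand-in for Redis

    public void updateAsset(String ticker, BigDecimal price) {
        // 1. Persist (upsert) to the relational store to keep the catalog consistent
        database.put(ticker, price);
        // 2. Refresh the cache under the agreed key pattern
        cache.put("market:price:" + ticker, price.toPlainString());
    }

    public String cachedPrice(String ticker) {
        return cache.get("market:price:" + ticker);
    }

    public static void main(String[] args) {
        AssetServiceSketch service = new AssetServiceSketch();
        service.updateAsset("PETR4", new BigDecimal("38.52"));
        System.out.println(service.cachedPrice("PETR4"));
    }
}
```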

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdazyfv5jb4z4w0h34tec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdazyfv5jb4z4w0h34tec.png" alt=" " width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Controller: Serving the Information
&lt;/h2&gt;

&lt;p&gt;We created REST endpoints so that other services or the front-end can query the asset catalog:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GET /api/v1/assets&lt;/code&gt;: Lists all available active assets.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GET /api/v1/assets/{ticker}&lt;/code&gt;: Returns details and the updated price for a specific asset.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kkm820ejhaav9unhkkj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kkm820ejhaav9unhkkj.png" alt=" " width="689" height="315"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  🛠️ What’s Next?
&lt;/h1&gt;

&lt;p&gt;The Asset microservice is the foundation that ensures the system knows "what" is being traded and "for how much." Since we are following an MVP strategy, the focus was on establishing this data contract and basic persistence.&lt;/p&gt;

&lt;p&gt;But an asset alone doesn't make a brokerage. It needs to belong to someone.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll increase the complexity and talk about the &lt;strong&gt;Broker Wallet API&lt;/strong&gt;. We will explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managing user financial balances.&lt;/li&gt;
&lt;li&gt;Linking asset custody to customer wallets.&lt;/li&gt;
&lt;li&gt;How the system reflects market variations in the investor's equity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What do you think of this step-by-step approach to building an ecosystem?&lt;/strong&gt; Let me know your thoughts in the comments!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/tooling-tips-visualizing-your-data-in-mongodb-and-kafka-1k1p"&gt;Tooling Tips&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘​ Series Index: &lt;a href="https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>kafka</category>
      <category>flyway</category>
      <category>mysql</category>
    </item>
    <item>
      <title>From Stream to Database: Processing Market Data with Spring Boot, Redis, and Flyway</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 28 Mar 2026 14:44:22 +0000</pubDate>
      <link>https://dev.to/rvneto/do-stream-para-o-banco-processando-market-data-com-spring-boot-redis-e-flyway-19d2</link>
      <guid>https://dev.to/rvneto/do-stream-para-o-banco-processando-market-data-com-spring-boot-redis-e-flyway-19d2</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;In my last post, we saw how our Python service collects B3 data and publishes it to Kafka. Today, we take a crucial step: &lt;strong&gt;consuming this data and making it useful for our brokerage.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I’ll introduce the &lt;strong&gt;Broker Asset API&lt;/strong&gt;, the Java microservice responsible for managing the asset catalog, keeping prices updated, and serving this information with ultra-low latency.&lt;/p&gt;




&lt;h1&gt;
  
  
  🎯 MVP Focus (Minimum Viable Product)
&lt;/h1&gt;

&lt;p&gt;Before diving into the code, a quick disclaimer: &lt;strong&gt;we are building the foundation&lt;/strong&gt;. At this stage, the goal is to ensure the end-to-end flow works.&lt;/p&gt;

&lt;p&gt;The focus now is on delivering core value: &lt;strong&gt;making data available and performant.&lt;/strong&gt; In the future, we will revisit this service to add unit tests, refine exception handling, and increase resilience.&lt;/p&gt;




&lt;h1&gt;
  
  
  🏗️ The Pillars of the Asset API
&lt;/h1&gt;

&lt;p&gt;For this MVP, I focused on four main implementation points:&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Database Evolution with Flyway
&lt;/h2&gt;

&lt;p&gt;To ensure our MySQL schema is versioned and reproducible, I used &lt;strong&gt;Flyway&lt;/strong&gt;. We created the &lt;code&gt;assets&lt;/code&gt; table to store the ticker, name, current price, and asset status.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Technical highlight:&lt;/strong&gt; An index on the &lt;code&gt;ticker&lt;/code&gt; field keeps symbol-based queries extremely fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flapfmadqwu9nj655xg7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flapfmadqwu9nj655xg7y.png" alt=" " width="594" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Kafka Consumer: Real-Time Reactivity
&lt;/h2&gt;

&lt;p&gt;The API "listens" to the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic. As soon as the Python service publishes a new price, our consumer captures the message and starts the update flow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ordering Guarantee:&lt;/strong&gt; Since we use the ticker as the Kafka key, all updates for a given asset land on the same partition and are therefore processed in the order they were produced.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@KafkaListener&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;topics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"trading-assets-market-data-v1"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;groupId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"broker-asset-api"&lt;/span&gt;
    &lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;consume&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;AssetMarketDataDTO&lt;/span&gt; &lt;span class="n"&gt;dto&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Received market data for: {}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dto&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getTicker&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;

        &lt;span class="n"&gt;assetService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;updateAsset&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dto&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. Service Layer: Hybrid Persistence (SQL + Redis)
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;AssetService&lt;/code&gt; is where the business logic resides. Upon receiving a new price, it performs two operations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;MySQL Persistence:&lt;/strong&gt; Updates the asset record (or creates a new one if it does not exist) to ensure data consistency.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2t0vf0y31ru753zsxw66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2t0vf0y31ru753zsxw66.png" alt=" " width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Cache Update (Redis):&lt;/strong&gt; The price is sent to a Redis cache under the key &lt;code&gt;market:price:{ticker}&lt;/code&gt;, which lets other system components query prices instantly without overloading the relational database.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firgezj8zgof51ae58c84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Firgezj8zgof51ae58c84.png" alt=" " width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Controller: Serving the Information
&lt;/h2&gt;

&lt;p&gt;We created REST endpoints so that other services or the front-end can query the asset catalog:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;GET /api/v1/assets&lt;/code&gt;: Lists all available active assets.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GET /api/v1/assets/{ticker}&lt;/code&gt;: Returns the details and updated price of a specific asset.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkbeewlxz98fz0lzksoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvkbeewlxz98fz0lzksoe.png" alt=" " width="689" height="315"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  🛠️ What’s Next?
&lt;/h1&gt;

&lt;p&gt;This Asset microservice is the foundation that ensures the system knows "what" is being traded and "for how much." Since we are following an &lt;strong&gt;MVP&lt;/strong&gt; strategy, the focus was on establishing this data contract and basic persistence.&lt;/p&gt;

&lt;p&gt;But an asset alone doesn't make a brokerage. It needs to belong to someone.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll go one step up in complexity and talk about the &lt;strong&gt;Broker Wallet API&lt;/strong&gt;. We will explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managing the user's financial balance.&lt;/li&gt;
&lt;li&gt;Linking asset custody to customer wallets.&lt;/li&gt;
&lt;li&gt;How the system prepares to reflect market variations in the investor's equity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What do you think of this approach of building the ecosystem piece by piece?&lt;/strong&gt; Leave your questions in the comments!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/dica-de-ferramentas-como-visualizar-seus-dados-no-mongodb-e-kafka-3hc"&gt;Tooling tips&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rvneto/trading-broker-asset" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>kafka</category>
      <category>flyway</category>
      <category>mysql</category>
    </item>
    <item>
      <title>Tooling Tips: Visualizing Your Data in MongoDB and Kafka</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sun, 15 Mar 2026 12:36:10 +0000</pubDate>
      <link>https://dev.to/rvneto/tooling-tips-visualizing-your-data-in-mongodb-and-kafka-1k1p</link>
      <guid>https://dev.to/rvneto/tooling-tips-visualizing-your-data-in-mongodb-and-kafka-1k1p</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;When developing distributed systems like &lt;strong&gt;My Broker B3&lt;/strong&gt;, you often find yourself "working blindly" if you don't have the right tools to validate what’s happening inside your containers.&lt;/p&gt;

&lt;p&gt;Today, I want to share two essential tools I’m using to ensure that the market data fetched with Python is correctly reaching the database and the message broker.&lt;/p&gt;




&lt;h1&gt;
  
  
  1. MongoDB Compass: The Official MongoDB GUI
&lt;/h1&gt;

&lt;p&gt;Even though MongoDB is running in an isolated environment via Docker, &lt;strong&gt;MongoDB Compass&lt;/strong&gt; is the tool I use to "see" the documents saved by the Market Data microservice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it solves:&lt;/strong&gt; It allows me to validate if the data mapping in Python worked correctly and if fields like &lt;code&gt;created_at&lt;/code&gt; and &lt;code&gt;price&lt;/code&gt; are in the expected format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How to connect to Docker:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection String:&lt;/strong&gt; &lt;code&gt;mongodb://localhost:27017&lt;/code&gt; (Ensure that port &lt;code&gt;27017&lt;/code&gt; matches the one mapped in your &lt;code&gt;docker-compose.yml&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Pro Tip:&lt;/strong&gt; In Compass, you can visualize the &lt;code&gt;price_history&lt;/code&gt; collection in either tabular or JSON format, which makes quick inspections much easier.&lt;/li&gt;

&lt;/ul&gt;
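&lt;p&gt;For reference, the connection above assumes a port mapping along these lines in your &lt;code&gt;docker-compose.yml&lt;/code&gt; (service and image names here are illustrative):&lt;/p&gt;

```yaml
services:
  mongodb:
    image: mongo:7          # illustrative image/tag
    ports:
      - "27017:27017"       # host:container — Compass connects to the host side
```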

&lt;h3&gt;
  
  
  📸 Viewing in document mode:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvzm9csfq73rhcgqx0o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvzm9csfq73rhcgqx0o0.png" alt=" " width="677" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  📸 Table view:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgafuh5tumnw92egjo0a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgafuh5tumnw92egjo0a6.png" alt=" " width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  2. Offset Explorer (Formerly Kafka Tool)
&lt;/h1&gt;

&lt;p&gt;Kafka can be intimidating to manage via the command line (CLI). &lt;strong&gt;Offset Explorer&lt;/strong&gt; is a desktop client that makes visualizing messages within your topics incredibly easy.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it solves:&lt;/strong&gt; It allows me to monitor, in real-time, the messages arriving in the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration for the Docker environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Name:&lt;/strong&gt; MyBrokerKafka (or any name you prefer).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bootstrap Broker:&lt;/strong&gt; &lt;code&gt;localhost:9092&lt;/code&gt; (this is the Kafka broker port; if the tool asks for Zookeeper, that service typically listens on &lt;code&gt;2181&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
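&lt;p&gt;One common pitfall when pointing a host-side client like Offset Explorer at a containerized broker: Kafka must advertise a listener that is reachable from the host. An illustrative compose fragment (the exact environment variable depends on the Kafka image you use):&lt;/p&gt;

```yaml
services:
  kafka:
    ports:
      - "9092:9092"
    environment:
      # Clients outside the Docker network connect via the advertised address
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
```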

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwepegy0fv647khuxgzu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwepegy0fv647khuxgzu2.png" alt=" " width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Crucial Tip:&lt;/strong&gt; To read the message content sent as JSON from Python, go to the tool's settings and change the &lt;strong&gt;Content Type&lt;/strong&gt; (for both Key and Value) from &lt;em&gt;Byte Array&lt;/em&gt; to &lt;strong&gt;String&lt;/strong&gt;. This ensures the data is displayed in a human-readable format.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16nak8n2k2h0eo0u15q1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16nak8n2k2h0eo0u15q1.png" alt=" " width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  📸 Viewing the messages
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkx57pwcpcep7clk1fy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkx57pwcpcep7clk1fy0.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  💡 Why is this important?
&lt;/h1&gt;

&lt;p&gt;Mastering these support tools significantly speeds up the development cycle. In our case, Offset Explorer was essential to validate one of the project's most important decisions: using the &lt;strong&gt;Ticker as the message key&lt;/strong&gt; in Kafka to ensure per-asset ordering.&lt;/p&gt;
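&lt;p&gt;The mechanics behind that decision: Kafka hashes the record key to pick a partition, and each partition is consumed in order, so records with equal keys always keep their relative order. A toy illustration of the invariant (using a plain array hash rather than Kafka's real murmur2, so the partition numbers will not match an actual cluster):&lt;/p&gt;

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Toy illustration of key-based partitioning. This is NOT Kafka's real
// murmur2 hash — only the invariant matters: equal keys always map to the
// same partition, which is what preserves per-asset ordering.
public class KeyPartitioningDemo {
    static int partitionFor(String key, int numPartitions) {
        byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
        return Math.floorMod(Arrays.hashCode(bytes), numPartitions);
    }

    public static void main(String[] args) {
        int partitions = 3;
        // Every update keyed "PETR4" maps to one partition; "VALE3" may map elsewhere
        System.out.println("PETR4 -> " + partitionFor("PETR4", partitions));
        System.out.println("PETR4 -> " + partitionFor("PETR4", partitions));
        System.out.println("VALE3 -> " + partitionFor("VALE3", partitions));
    }
}
```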




&lt;h1&gt;
  
  
  🚀 Conclusion
&lt;/h1&gt;

&lt;p&gt;Having full visibility over your data is the first step toward building resilient systems. In the next post of the main series, we will head back to the Java ecosystem to start consuming these events!&lt;/p&gt;

&lt;p&gt;What about you? What tools do you use to debug your distributed systems? Let me know in the comments!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/market-data-integrator-consuming-real-time-data-with-python-mongodb-and-kafka-8k1"&gt;Market Data Integrator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘​ Series Index: &lt;a href="https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mongodb</category>
      <category>kafka</category>
      <category>tooling</category>
      <category>developer</category>
    </item>
    <item>
      <title>Tooling Tips: How to Visualize Your Data in MongoDB and Kafka</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sun, 15 Mar 2026 12:02:53 +0000</pubDate>
      <link>https://dev.to/rvneto/dica-de-ferramentas-como-visualizar-seus-dados-no-mongodb-e-kafka-3hc</link>
      <guid>https://dev.to/rvneto/dica-de-ferramentas-como-visualizar-seus-dados-no-mongodb-e-kafka-3hc</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;When developing distributed systems like &lt;strong&gt;My Broker B3&lt;/strong&gt;, we often work "blindly" if we don't have the right tools to validate what happens inside our containers.&lt;/p&gt;

&lt;p&gt;Today, I want to share two essential tools I’m using to ensure the market data I fetch with Python is arriving correctly in the database and the message broker.&lt;/p&gt;




&lt;h1&gt;
  
  
  1. MongoDB Compass: The Official MongoDB GUI
&lt;/h1&gt;

&lt;p&gt;Although MongoDB runs in isolation inside Docker, &lt;strong&gt;MongoDB Compass&lt;/strong&gt; is the tool I use to "see" the documents saved by the Market Data microservice.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it solves:&lt;/strong&gt; It lets me validate that the data mapping in Python worked and that fields like &lt;code&gt;created_at&lt;/code&gt; and &lt;code&gt;price&lt;/code&gt; are in the expected format.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;How to connect to Docker:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Connection String:&lt;/strong&gt; &lt;code&gt;mongodb://localhost:27017&lt;/code&gt; (make sure port &lt;code&gt;27017&lt;/code&gt; is the one you mapped in your &lt;code&gt;docker-compose.yml&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Usage tip:&lt;/strong&gt; In Compass, you can browse the price history in the &lt;code&gt;price_history&lt;/code&gt; collection in tabular or JSON format, which makes quick checks much easier.&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  📸 Document view:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvzm9csfq73rhcgqx0o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flvzm9csfq73rhcgqx0o0.png" alt=" " width="677" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  📸 Table view:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgafuh5tumnw92egjo0a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgafuh5tumnw92egjo0a6.png" alt=" " width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  2. Offset Explorer (Formerly Kafka Tool)
&lt;/h1&gt;

&lt;p&gt;Kafka can be intimidating to manage from the command line. &lt;strong&gt;Offset Explorer&lt;/strong&gt; is a desktop client that makes it dramatically easier to inspect the messages inside your topics.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it solves:&lt;/strong&gt; It lets me watch, in real time, the messages arriving in the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration for the Docker environment:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster Name:&lt;/strong&gt; MyBrokerKafka (or any name you prefer).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bootstrap Broker:&lt;/strong&gt; &lt;code&gt;localhost:9092&lt;/code&gt; (this is the Kafka broker port; if the tool asks for Zookeeper, that service typically listens on &lt;code&gt;2181&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwepegy0fv647khuxgzu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwepegy0fv647khuxgzu2.png" alt=" " width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Crucial tip:&lt;/strong&gt; To read the content of the messages we sent as JSON from Python, go into the tool's settings and change the &lt;strong&gt;Content Type&lt;/strong&gt; (Key and Value) from &lt;em&gt;Byte Array&lt;/em&gt; to &lt;strong&gt;String&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
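&lt;p&gt;The reason this tip works: the Python producer serializes each payload to UTF-8 bytes, and Kafka stores only raw byte arrays. A minimal sketch of the round trip (the payload values are illustrative):&lt;/p&gt;

```python
import json

# The producer side serializes each payload dict to UTF-8 bytes before
# publishing; Kafka itself only ever stores and delivers raw byte arrays.
payload = {"ticker": "PETR4", "price": 38.42}  # illustrative values
raw_bytes = json.dumps(payload).encode("utf-8")

# Switching Offset Explorer's Content Type to String is the equivalent of
# this decode step: the bytes become text, which then reads as JSON.
decoded = json.loads(raw_bytes.decode("utf-8"))
```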

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16nak8n2k2h0eo0u15q1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16nak8n2k2h0eo0u15q1.png" alt=" " width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  📸 Viewing the messages
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkx57pwcpcep7clk1fy0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkx57pwcpcep7clk1fy0.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  💡 Why does this matter?
&lt;/h1&gt;

&lt;p&gt;Mastering these supporting tools greatly speeds up the development cycle. In our case, &lt;strong&gt;Offset Explorer&lt;/strong&gt; was essential for validating one of the project's most important decisions: using the &lt;strong&gt;ticker as the message key&lt;/strong&gt; in Kafka to guarantee per-asset price ordering.&lt;/p&gt;




&lt;h1&gt;
  
  
  🚀 Conclusion
&lt;/h1&gt;

&lt;p&gt;Having visibility into your data is the first step toward building resilient systems. In the next post of the main series, we return to the Java ecosystem to start consuming these events!&lt;/p&gt;

&lt;p&gt;Did you find this tip useful? Which tools do you usually reach for when debugging your distributed systems?&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/integrador-de-market-data-consumindo-dados-reais-com-python-mongodb-e-kafka-4c2i"&gt;Market Data Integrator&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mongodb</category>
      <category>kafka</category>
      <category>tooling</category>
      <category>developer</category>
    </item>
    <item>
      <title>Market Data Integrator: Consuming Real-Time Data with Python, MongoDB, and Kafka</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 20:48:15 +0000</pubDate>
      <link>https://dev.to/rvneto/market-data-integrator-consuming-real-time-data-with-python-mongodb-and-kafka-8k1</link>
      <guid>https://dev.to/rvneto/market-data-integrator-consuming-real-time-data-with-python-mongodb-and-kafka-8k1</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;Continuing the &lt;strong&gt;My Broker B3&lt;/strong&gt; series, today let's talk about the component that feeds the entire ecosystem with real-time data from the Brazilian financial market: The &lt;strong&gt;Broker Market Data API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This Python-based microservice acts as an ingestor, connecting the external world (Brapi API) to our internal infrastructure.&lt;/p&gt;




&lt;h1&gt;
  
  
  🏗️ The Solution and Data Flow
&lt;/h1&gt;

&lt;p&gt;The main objective is to ensure that asset prices are always up to date for other services. The data flow was designed in three main steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scheduled Ingestion:&lt;/strong&gt; The service iterates through a Watchlist of 50 assets, including Blue Chips (such as &lt;code&gt;PETR4&lt;/code&gt; and &lt;code&gt;VALE3&lt;/code&gt;), REITs (FIIs), and ETFs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Historical Persistence (MongoDB):&lt;/strong&gt; Before any processing, the complete payload is saved in &lt;strong&gt;MongoDB&lt;/strong&gt; (&lt;code&gt;broker_market_data_db&lt;/code&gt;). This ensures we have an audit trail and data for future analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event Streaming (Kafka):&lt;/strong&gt; The updated price is published to the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic. This allows any other microservice to react to these changes in real time.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
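&lt;p&gt;A minimal sketch of that three-step loop (the function names and structure are illustrative, not the actual &lt;code&gt;main.py&lt;/code&gt;; storage and publishing are injected as callables so the flow runs without a live MongoDB or Kafka):&lt;/p&gt;

```python
import json
import time

# Hypothetical sketch of the ingestion loop. fetch_quote, store and publish
# are passed in so the three-step flow can be shown (and exercised) without
# real external dependencies.
def ingest(watchlist, fetch_quote, store, publish, pause=0.0):
    for ticker in watchlist:
        raw = fetch_quote(ticker)           # 1. scheduled ingestion
        store(raw)                          # 2. historical persistence (MongoDB)
        publish(ticker, json.dumps(raw))    # 3. event streaming (Kafka), keyed by ticker
        time.sleep(pause)                   # rate limiting for the free API tier

# Toy run with in-memory stand-ins:
stored, published = [], []
ingest(
    ["PETR4", "VALE3"],
    fetch_quote=lambda t: {"ticker": t, "price": 10.0},
    store=stored.append,
    publish=lambda k, v: published.append((k, v)),
)
```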




&lt;h1&gt;
  
  
  🛠️ Implementation Details
&lt;/h1&gt;

&lt;p&gt;I chose &lt;strong&gt;Python 3.12&lt;/strong&gt; for this component due to its efficiency in handling HTTP requests and its excellent ecosystem for data integration.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Importance of the Kafka Message Key
&lt;/h3&gt;

&lt;p&gt;A vital technical decision in this service was using the &lt;strong&gt;asset ticker as the message key&lt;/strong&gt; when publishing to Kafka.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this important?&lt;/strong&gt;&lt;br&gt;
Kafka guarantees message ordering &lt;strong&gt;only within a single partition&lt;/strong&gt;. By setting the ticker (e.g., &lt;code&gt;PETR4&lt;/code&gt;) as the key, Kafka ensures that all messages for that specific asset are always routed to the &lt;strong&gt;same partition&lt;/strong&gt;. This guarantees that any consumer will process price updates in the exact order they occurred, preventing a race condition where an older price might be processed after a newer one.&lt;/p&gt;
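&lt;p&gt;Under the hood, the default partitioner hashes the key bytes to pick a partition (the Java client uses murmur2). A simplified stand-in using a stable stdlib hash illustrates why identical keys always map to the same partition:&lt;/p&gt;

```python
import zlib

# Simplified stand-in for Kafka's default partitioner. The real Java client
# uses murmur2; crc32 is used here only because it is a stable stdlib hash
# that makes the "same key, same partition" property easy to demonstrate.
def partition_for(key: str, num_partitions: int) -> int:
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Every message keyed with the same ticker maps to the same partition,
# which is exactly what preserves per-asset ordering.
p1 = partition_for("PETR4", 6)
p2 = partition_for("PETR4", 6)
```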

&lt;h3&gt;
  
  
  Code Highlights:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rate Limiting:&lt;/strong&gt; Implemented a &lt;code&gt;time.sleep(0.5)&lt;/code&gt; between API calls to respect the limits of the Brapi free tier and avoid throttling.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Mapping:&lt;/strong&gt; The payload is transformed into a standardized format (ticker, price, volume, timestamp) before transmission.
&lt;/li&gt;
&lt;/ul&gt;
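&lt;p&gt;The mapping step itself is small: it plucks only the fields consumers need out of the Brapi quote result. A sketch (field names follow Brapi's response format; the sample values are illustrative):&lt;/p&gt;

```python
# Sketch of the data-mapping step: keep only the fields consumer services
# need from a Brapi quote result (field names follow Brapi's response format).
def to_payload(result: dict) -> dict:
    return {
        "ticker": result.get("symbol"),
        "price": result.get("regularMarketPrice"),
        "volume": result.get("regularMarketVolume"),
        "updated_at": result.get("regularMarketTime"),
    }

payload = to_payload({
    "symbol": "PETR4",
    "regularMarketPrice": 38.42,
    "regularMarketVolume": 1000,
    "regularMarketTime": "2026-03-14 20:00:00",
})
```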

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Snippet of the Kafka production with keys in main.py
&lt;/span&gt;&lt;span class="n"&gt;producer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;produce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;TOPIC_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ticker&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;# Ensures ordering per asset
&lt;/span&gt;    &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;callback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;delivery_report&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  ✅ Validating the Execution
&lt;/h1&gt;

&lt;p&gt;To ensure the integration is working as expected, I validated two main output points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MongoDB:&lt;/strong&gt; I verified the &lt;code&gt;price_history&lt;/code&gt; collection in the &lt;code&gt;market_data_db&lt;/code&gt; database, confirming that documents are being inserted with the correct &lt;code&gt;created_at&lt;/code&gt; timestamp.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5ijt91kbqn4xqxxuizk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5ijt91kbqn4xqxxuizk.png" alt=" " width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Kafka:&lt;/strong&gt; Using the management UI, I confirmed that messages are arriving with the correct keys and values in the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4z03sd15m4yf7dfqgej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4z03sd15m4yf7dfqgej.png" alt=" " width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  🚀 Conclusion
&lt;/h1&gt;

&lt;p&gt;With this service running, our simulator now "sees" the market in real time. The next challenge is to consume these events from within the Java microservices to update user portfolios and trigger order matching.&lt;/p&gt;

&lt;p&gt;Do you have any questions about the ingestion strategy or using Kafka with Python? Let's discuss in the comments!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/infrastructure-as-code-deploying-a-financial-ecosystem-with-docker-compose-2p3d"&gt;Infrastructure as Code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rvneto/trading-broker-market-data" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>api</category>
      <category>dataengineering</category>
      <category>microservices</category>
      <category>python</category>
    </item>
    <item>
      <title>Market Data Integrator: Consuming Real Data with Python, MongoDB, and Kafka</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 20:08:02 +0000</pubDate>
      <link>https://dev.to/rvneto/integrador-de-market-data-consumindo-dados-reais-com-python-mongodb-e-kafka-4c2i</link>
      <guid>https://dev.to/rvneto/integrador-de-market-data-consumindo-dados-reais-com-python-mongodb-e-kafka-4c2i</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;Continuing the &lt;strong&gt;My Broker B3&lt;/strong&gt; series, today let's talk about the component that feeds the entire ecosystem with real data from the Brazilian financial market: the &lt;strong&gt;Broker Market Data API&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This Python microservice acts as an &lt;em&gt;ingestor&lt;/em&gt;, connecting the external world (Brapi API) to our internal infrastructure.&lt;/p&gt;




&lt;h1&gt;
  
  
  🏗️ The Solution and the Data Flow
&lt;/h1&gt;

&lt;p&gt;The goal here is to ensure that asset prices are always up to date for the other services. The flow was designed in three main steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Scheduled Ingestion&lt;/strong&gt;: The service iterates through a &lt;em&gt;Watchlist&lt;/em&gt; of 50 assets (including Blue Chips such as PETR4 and VALE3, plus REITs (FIIs) and ETFs).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Historical Persistence (MongoDB)&lt;/strong&gt;: Before any processing, the complete payload is saved in &lt;strong&gt;MongoDB&lt;/strong&gt;. This ensures we have an audit trail and data for future analysis.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Event Streaming (Kafka)&lt;/strong&gt;: The updated price is published to the &lt;code&gt;trading-assets-market-data-v1&lt;/code&gt; topic. This allows any other microservice to react to the change in real time.&lt;/li&gt;
&lt;/ol&gt;




&lt;h1&gt;
  
  
  🛠️ Implementation Details
&lt;/h1&gt;

&lt;p&gt;I chose &lt;strong&gt;Python 3.12&lt;/strong&gt; for its agility in handling HTTP requests and its integration with data drivers.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Importance of the Kafka Message Key
&lt;/h3&gt;

&lt;p&gt;A vital technical decision in this service was using the &lt;strong&gt;asset ticker as the key&lt;/strong&gt; of the message sent to Kafka.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why is this important?&lt;/strong&gt;&lt;br&gt;
Kafka guarantees message ordering only within a single partition. By setting the ticker (e.g., &lt;code&gt;PETR4&lt;/code&gt;) as the key, Kafka ensures that all messages for that asset always land on the &lt;strong&gt;same partition&lt;/strong&gt;. This guarantees that any consumer will read the events in the exact order they occurred, preventing an old price from being processed after a newer one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Highlights:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rate Limiting&lt;/strong&gt;: Since I use the Brapi free tier, the code implements a &lt;code&gt;time.sleep(0.5)&lt;/code&gt; between calls to respect the API limits and avoid &lt;em&gt;throttling&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Mapping&lt;/strong&gt;: The payload is transformed into a standardized format before being sent to Kafka, ensuring that consumer services receive only what they need (ticker, price, volume, and timestamp).
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Trecho do mapeamento de dados no main.py
&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ticker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;symbol&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;regularMarketPrice&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;volume&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;regularMarketVolume&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;updated_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;regularMarketTime&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h1&gt;
  
  
  ✅ Validating the Execution
&lt;/h1&gt;

&lt;p&gt;To ensure the integration is working, I validated the two output points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MongoDB:&lt;/strong&gt; I checked the &lt;code&gt;price_history&lt;/code&gt; collection in the &lt;code&gt;market_data_db&lt;/code&gt; database, where documents are being inserted with the &lt;code&gt;created_at&lt;/code&gt; field generated by our repository.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fole7nnhp5ylcu94swmsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fole7nnhp5ylcu94swmsq.png" alt=" " width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Kafka:&lt;/strong&gt; Through the management UI, I confirmed that messages are arriving in the topic with the correct keys (tickers) and values.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq25i2boas29xcrxeptc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq25i2boas29xcrxeptc5.png" alt=" " width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  🚀 Conclusion
&lt;/h1&gt;

&lt;p&gt;With this service running, our simulator now "sees" the market in real time. The next challenge is to consume these events from within the Java APIs to update user portfolios.&lt;/p&gt;

&lt;p&gt;Do you have any questions about the ingestion strategy or about using Kafka with Python? Leave them in the comments!&lt;/p&gt;

&lt;p&gt;To follow this series, &lt;a href="https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef"&gt;click here&lt;/a&gt;.&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/infraestrutura-como-codigo-subindo-o-ecossistema-financeiro-com-docker-compose-ojb"&gt;Infrastructure as Code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rvneto/trading-broker-market-data" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>api</category>
      <category>database</category>
      <category>microservices</category>
      <category>python</category>
    </item>
    <item>
      <title>Series Roadmap: Building a Stock Brokerage Simulator with Microservices</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 14:12:01 +0000</pubDate>
      <link>https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh</link>
      <guid>https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh</guid>
      <description>&lt;p&gt;Welcome to the official index for the &lt;strong&gt;My Broker B3&lt;/strong&gt; series. This post serves as a central hub where I organize all the articles about this financial ecosystem's development in the ideal reading order.&lt;/p&gt;

&lt;p&gt;This project is a hands-on lab where I apply software engineering, distributed systems, and messaging to simulate the integration between a Brokerage and the Stock Exchange.&lt;/p&gt;




&lt;h1&gt;
  
  
  🚀 Series Articles
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/building-a-microservices-ecosystem-stock-brokerage-simulator-my-broker-b3-45bj"&gt;Project Overview&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction to the macro architecture, tech stack (Java, Python, Kafka, RabbitMQ), and the simulator's goals.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/infrastructure-as-code-deploying-a-financial-ecosystem-with-docker-compose-2p3d"&gt;Infrastructure with Docker Compose&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How I deployed 14 containers (SQL, NoSQL, Cache, and Messaging) ensuring domain isolation and &lt;code&gt;.env&lt;/code&gt; best practices.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/market-data-integrator-consuming-real-time-data-with-python-mongodb-and-kafka-8k1"&gt;Market Data: The Python, MongoDB, and Kafka Integrator&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How I built the ingestion service that consumes the Brapi API, ensures historical persistence in MongoDB, and uses Kafka keys to guarantee message ordering per asset.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/tooling-tips-visualizing-your-data-in-mongodb-and-kafka-1k1p"&gt;Tooling Tips: MongoDB Compass and Offset Explorer&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to configure and use visual tools to validate MongoDB persistence and Kafka message streams during development.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/from-stream-to-database-processing-market-data-with-spring-boot-redis-and-flyway-35m9"&gt;From Stream to Database: Processing Market Data with Spring Boot, Redis, and Flyway&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first Java microservice in the ecosystem: how to consume Kafka data, version the database with Flyway, and implement a high-performance Redis cache to serve asset prices.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;... more articles will be added as the development progresses!&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Connect with me:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>learning</category>
      <category>portfolio</category>
      <category>microservices</category>
      <category>backend</category>
    </item>
    <item>
      <title>Series Guide: Building a Brokerage Simulator with Microservices</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 13:59:02 +0000</pubDate>
      <link>https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef</link>
      <guid>https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef</guid>
      <description>&lt;p&gt;Welcome to the official index of the &lt;strong&gt;My Broker B3&lt;/strong&gt; series. Here you will find all the published articles about the development of this financial ecosystem, organized in the ideal reading order.&lt;/p&gt;

&lt;p&gt;This project is a hands-on lab where I apply software engineering, distributed systems, and messaging to simulate the integration between a Brokerage and B3.&lt;/p&gt;




&lt;h1&gt;
  
  
  🚀 Series Articles
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/construindo-um-ecossistema-de-microservicos-simulador-de-corretora-de-valores-my-broker-b3-1g4n"&gt;Visão Geral do Projeto&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction to the macro architecture, tech stack (Java, Python, Kafka, RabbitMQ), and the simulator's goals.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/infraestrutura-como-codigo-subindo-o-ecossistema-financeiro-com-docker-compose-ojb"&gt;Infraestrutura com Docker Compose&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How I spun up 14 containers (SQL, NoSQL, Cache, and Messaging) ensuring domain isolation and &lt;code&gt;.env&lt;/code&gt; best practices.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/integrador-de-market-data-consumindo-dados-reais-com-python-mongodb-e-kafka-4c2i"&gt;Market Data: O Integrador Python, MongoDB e Kafka&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How I built the ingestion service that consumes the Brapi API, ensures historical persistence in MongoDB, and uses Kafka keys to guarantee per-asset quote ordering.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/dica-de-ferramentas-como-visualizar-seus-dados-no-mongodb-e-kafka-3hc"&gt;Dicas de Ferramentas: MongoDB Compass e Offset Explorer&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to configure and use visual tools to validate MongoDB persistence and the Kafka message flow during development.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/rvneto/do-stream-para-o-banco-processando-market-data-com-spring-boot-redis-e-flyway-19d2"&gt;Do Stream para o Banco: Processando Market Data com Spring Boot, Redis e Flyway&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first Java microservice in the ecosystem: how to consume Kafka data, version the database with Flyway, and implement a high-performance Redis cache to serve asset prices.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;... more articles will be added as development progresses!&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Connect with me:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>learning</category>
      <category>portfolio</category>
      <category>microservices</category>
      <category>backend</category>
    </item>
    <item>
      <title>Infrastructure as Code: Deploying a Financial Ecosystem with Docker Compose</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 13:51:38 +0000</pubDate>
      <link>https://dev.to/rvneto/infrastructure-as-code-deploying-a-financial-ecosystem-with-docker-compose-2p3d</link>
      <guid>https://dev.to/rvneto/infrastructure-as-code-deploying-a-financial-ecosystem-with-docker-compose-2p3d</guid>
      <description>&lt;p&gt;Hello, everyone! 👋&lt;/p&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/rvneto/building-a-microservices-ecosystem-stock-brokerage-simulator-my-broker-b3-45bj"&gt;previous post&lt;/a&gt;, I presented an overview of &lt;strong&gt;My Broker B3&lt;/strong&gt;. Today, let's "open the hood" and talk about the foundation that supports this entire ecosystem: the &lt;strong&gt;infrastructure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For a microservices system, manually configuring each database, message broker, and monitoring tool would be a productivity nightmare. That’s why I used &lt;strong&gt;Docker Compose&lt;/strong&gt; to create a local environment that replicates the needs of a highly complex distributed system.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ Domain Isolation Strategy
&lt;/h2&gt;

&lt;p&gt;A central design decision for this project was &lt;strong&gt;data isolation by domain&lt;/strong&gt;. Instead of a single monolithic database, each microservice has its own instance or logical base:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Relational Persistence (SQL):&lt;/strong&gt; I use MySQL 8.0 for the Brokerage domains (&lt;code&gt;Identity&lt;/code&gt;, &lt;code&gt;Wallet&lt;/code&gt;, &lt;code&gt;Order&lt;/code&gt;, and &lt;code&gt;Asset&lt;/code&gt;) and &lt;strong&gt;PostgreSQL 15&lt;/strong&gt; for the B3 Core. This ensures domain decoupling—for instance, a failure in the order database won't affect the identity service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cache Layer (Redis):&lt;/strong&gt; I implemented isolated, lightweight &lt;strong&gt;Redis (Alpine)&lt;/strong&gt; instances for market data and wallet services, guaranteeing sub-millisecond latency where speed is critical.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;NoSQL (MongoDB):&lt;/strong&gt; A centralized instance for storing unstructured data, such as price history (Ticks) and reports, allowing for high schema flexibility.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📡 Messaging: The System Backbone
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;docker-compose.yml&lt;/code&gt; file orchestrates two messaging giants for different purposes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Apache Kafka (KRaft):&lt;/strong&gt; Configured in the modern &lt;strong&gt;KRaft&lt;/strong&gt; mode (eliminating the need for Zookeeper), which makes the container lightweight and stable for local development. It handles the high volume of internal events.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;RabbitMQ:&lt;/strong&gt; Acts as our communication bridge between the Broker and the B3 simulator, ensuring complete decoupling between the systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  📊 Observability from "Day 1"
&lt;/h2&gt;

&lt;p&gt;You can't manage what you don't measure. That’s why I included &lt;strong&gt;Prometheus&lt;/strong&gt; and &lt;strong&gt;Grafana&lt;/strong&gt; directly into the core infrastructure. As soon as a microservice is deployed, it can immediately export metrics to be viewed in real-time dashboards.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔐 Security and Configuration with .env
&lt;/h2&gt;

&lt;p&gt;A crucial detail for any professional project is secrets management and configuration. In my &lt;code&gt;docker-compose.yml&lt;/code&gt;, I used &lt;strong&gt;environment variables&lt;/strong&gt; to avoid hardcoding passwords and ports.&lt;/p&gt;

&lt;p&gt;All sensitive configurations are isolated in a &lt;code&gt;.env&lt;/code&gt; file (ignored by Git), while an &lt;code&gt;.env.example&lt;/code&gt; file is provided as a template. This demonstrates a security best practice by ensuring that credentials are never exposed in the source code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Excerpt from docker-compose using environment variables.&lt;/span&gt;
&lt;span class="na"&gt;broker-identity-db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql:8.0&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${BROKER_IDENTITY_DB_NAME}&lt;/span&gt;
    &lt;span class="na"&gt;MYSQL_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${BROKER_DB_USER}&lt;/span&gt;
    &lt;span class="na"&gt;MYSQL_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${BROKER_DB_PASS}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
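&lt;p&gt;A matching &lt;code&gt;.env.example&lt;/code&gt; could then carry placeholder values for the same variables (the values below are dummies, not the project’s real template):&lt;/p&gt;

```shell
# .env.example — copy to .env and fill in real values; .env stays out of Git
BROKER_IDENTITY_DB_NAME=broker_identity
BROKER_DB_USER=broker
BROKER_DB_PASS=changeme
```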






&lt;h2&gt;
  
  
  🚀 How to Run
&lt;/h2&gt;

&lt;p&gt;With the infrastructure automated, a single command is enough to spin up all 14 containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4psbc6wznalx13e7zty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4psbc6wznalx13e7zty.png" alt=" " width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This standardization ensures that the development environment is identical for anyone collaborating on the project, eliminating the classic "it works on my machine" problem.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll talk about the first microservice I created. This service integrates with the external Brappi API, stores data in MongoDB, and publishes real-time price updates to a Kafka topic.&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/building-a-microservices-ecosystem-stock-brokerage-simulator-my-broker-b3-45bj"&gt;Building a Microservices Ecosystem&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/rvneto/trading-docker-infra" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mybrokerb3</category>
      <category>docker</category>
      <category>infrastructure</category>
      <category>containers</category>
    </item>
    <item>
      <title>Infrastructure as Code: Spinning Up the Financial Ecosystem with Docker Compose</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 12:58:28 +0000</pubDate>
      <link>https://dev.to/rvneto/infraestrutura-como-codigo-subindo-o-ecossistema-financeiro-com-docker-compose-ojb</link>
      <guid>https://dev.to/rvneto/infraestrutura-como-codigo-subindo-o-ecossistema-financeiro-com-docker-compose-ojb</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/rvneto/construindo-um-ecossistema-de-microservicos-simulador-de-corretora-de-valores-my-broker-b3-1g4n"&gt;previous post&lt;/a&gt;, I introduced the overall vision of &lt;strong&gt;My Broker B3&lt;/strong&gt;. Today, we "pop the hood" and talk about the foundation that supports this entire ecosystem: the &lt;strong&gt;infrastructure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For a microservices system, manually configuring every database, message broker, and monitoring tool would be a productivity nightmare. That's why I used &lt;strong&gt;Docker Compose&lt;/strong&gt; to create a local environment that faithfully replicates the needs of a highly complex distributed system.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ The Domain Isolation Strategy
&lt;/h2&gt;

&lt;p&gt;A central design decision in this project was &lt;strong&gt;data isolation per domain&lt;/strong&gt;. Instead of a single monolithic database, each microservice has its own instance or logical database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Relational Persistence (SQL):&lt;/strong&gt; I use &lt;strong&gt;MySQL 8.0&lt;/strong&gt; for the brokerage domains (&lt;code&gt;Identity&lt;/code&gt;, &lt;code&gt;Wallet&lt;/code&gt;, &lt;code&gt;Order&lt;/code&gt;, and &lt;code&gt;Asset&lt;/code&gt;) and &lt;strong&gt;PostgreSQL 15&lt;/strong&gt; for the B3 core. This ensures that a failure in the orders database does not affect the identity service, for example.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cache Layer (Redis):&lt;/strong&gt; I deployed lightweight, isolated &lt;strong&gt;Redis (Alpine)&lt;/strong&gt; instances for market and wallet data, guaranteeing sub-millisecond latency where speed is critical.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NoSQL (MongoDB):&lt;/strong&gt; A centralized instance for storing unstructured data, such as price history (Ticks) and reports.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📡 Messaging: The System Backbone
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;docker-compose.yml&lt;/code&gt; file orchestrates two messaging giants for different purposes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Apache Kafka (KRaft):&lt;/strong&gt; Configured in the modern &lt;strong&gt;KRaft&lt;/strong&gt; mode (no Zookeeper), which makes the container lighter and more stable for local development. It handles the high volume of internal events.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;RabbitMQ:&lt;/strong&gt; Acts as our communication bridge between the Broker and B3, ensuring complete decoupling between the systems.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  📊 Observability from "Day 1"
&lt;/h2&gt;

&lt;p&gt;You can't manage what you don't measure. That's why I included &lt;strong&gt;Prometheus&lt;/strong&gt; and &lt;strong&gt;Grafana&lt;/strong&gt; directly in the base infrastructure. As soon as a microservice comes up, it can already export metrics that are visualized in real-time dashboards.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔐 Security and Configuration with .env
&lt;/h2&gt;

&lt;p&gt;A crucial detail for any professional project is secrets and configuration management. In the &lt;code&gt;docker-compose.yml&lt;/code&gt;, I used &lt;strong&gt;environment variables&lt;/strong&gt; to avoid &lt;em&gt;hardcoding&lt;/em&gt; passwords and ports.&lt;/p&gt;

&lt;p&gt;All sensitive configuration is isolated in a &lt;code&gt;.env&lt;/code&gt; file (ignored by Git), while an &lt;code&gt;.env.example&lt;/code&gt; file is provided as a template. This demonstrates a security best practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Excerpt from docker-compose using environment variables&lt;/span&gt;
&lt;span class="na"&gt;broker-identity-db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mysql:8.0&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;MYSQL_DATABASE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${BROKER_IDENTITY_DB_NAME}&lt;/span&gt;
    &lt;span class="na"&gt;MYSQL_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${BROKER_DB_USER}&lt;/span&gt;
    &lt;span class="na"&gt;MYSQL_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${BROKER_DB_PASS}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🚀 How to Run
&lt;/h2&gt;

&lt;p&gt;With the infrastructure automated, a single command is enough to spin up the 12 containers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foulq6gvls1iotgvf28tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foulq6gvls1iotgvf28tv.png" alt=" " width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This standardization ensures that the development environment would be identical for anyone collaborating on the project, eliminating the classic "it works on my machine" problem.&lt;/p&gt;

&lt;p&gt;In the next post, we’ll talk about the first microservice I created, which integrates with the external Brappi API, stores data in a Mongo database, and publishes updated prices to a Kafka topic!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;⬅️ Previous Post: &lt;a href="https://dev.to/rvneto/construindo-um-ecossistema-de-microservicos-simulador-de-corretora-de-valores-my-broker-b3-1g4n"&gt;Building a Microservices Ecosystem&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/rvneto/trading-docker-infra" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mybrokerb3</category>
      <category>docker</category>
      <category>infrastructure</category>
      <category>containers</category>
    </item>
    <item>
      <title>Building a Microservices Ecosystem: Stock Brokerage Simulator (My Broker B3)</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 01:46:15 +0000</pubDate>
      <link>https://dev.to/rvneto/building-a-microservices-ecosystem-stock-brokerage-simulator-my-broker-b3-45bj</link>
      <guid>https://dev.to/rvneto/building-a-microservices-ecosystem-stock-brokerage-simulator-my-broker-b3-45bj</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;I’m starting a series of articles to document the development of &lt;strong&gt;My Broker B3&lt;/strong&gt;. This is a personal project where I’m applying advanced software engineering concepts, distributed systems, and messaging to simulate the real-world operations of a stock brokerage.&lt;/p&gt;

&lt;p&gt;The main objective is to create an ecosystem that handles challenges such as &lt;strong&gt;data consistency&lt;/strong&gt;, &lt;strong&gt;low latency&lt;/strong&gt;, and &lt;strong&gt;asynchronous communication&lt;/strong&gt;, all while integrating a simplified &lt;strong&gt;matching engine&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ System Architecture
&lt;/h2&gt;

&lt;p&gt;The project was designed following a microservices approach, using a hybrid stack to leverage the best of each ecosystem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Backend (Java/Spring Boot 3):&lt;/strong&gt; Responsible for the order (&lt;code&gt;broker-order-api&lt;/code&gt;), wallet (&lt;code&gt;broker-wallet-api&lt;/code&gt;), and asset management (&lt;code&gt;broker-asset-api&lt;/code&gt;) APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Data (Python):&lt;/strong&gt; An integrator (&lt;code&gt;broker-market-data-api&lt;/code&gt;) that manages market data ingestion via scheduled tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matching Engine (Java):&lt;/strong&gt; A B3 simulator (&lt;code&gt;b3-matching-engine-api&lt;/code&gt;) that processes the execution of orders sent by the brokerage.&lt;/li&gt;
&lt;/ul&gt;
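&lt;p&gt;To make the matching engine’s role concrete, here is a toy sketch of price-time priority matching in Python. It illustrates the general idea only, not the logic of &lt;code&gt;b3-matching-engine-api&lt;/code&gt;; the function and data shapes are assumptions.&lt;/p&gt;

```python
# Toy price-time priority matching: a buy order executes against the
# best (lowest-priced) resting sells at or below its limit price.
# This illustrates the concept only; it is not b3-matching-engine-api.

def match_buy(buy_price, qty, asks):
    """asks: list of (price, qty) sorted by price, then arrival order."""
    fills = []
    while qty > 0 and asks and asks[0][0] <= buy_price:
        ask_price, ask_qty = asks[0]
        traded = min(qty, ask_qty)
        fills.append((ask_price, traded))
        qty -= traded
        if traded == ask_qty:
            asks.pop(0)                         # price level fully consumed
        else:
            asks[0] = (ask_price, ask_qty - traded)
    return fills, qty                           # executions, unfilled rest

asks = [(10.0, 100), (10.5, 50)]
fills, rest = match_buy(10.5, 120, asks)
assert fills == [(10.0, 100), (10.5, 20)]       # best price fills first
assert rest == 0 and asks == [(10.5, 30)]       # book partially consumed
```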

&lt;h2&gt;
  
  
  ⚙️ Data Flow and Technologies
&lt;/h2&gt;

&lt;p&gt;To ensure resilience and scalability, I adopted a hybrid communication strategy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Synchronous (REST):&lt;/strong&gt; Used for critical real-time validations, such as verifying the wallet balance before allowing an order to be sent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asynchronous (Event-Driven):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Apache Kafka:&lt;/strong&gt; Acts as an internal event bus for distributing market quotes and asset-related events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RabbitMQ:&lt;/strong&gt; Manages the communication between the Broker and the B3 Simulator through dedicated queues.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
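&lt;p&gt;The asynchronous leg can be pictured as topic-based fan-out: a producer publishes an event once, and every subscriber of that topic receives it. The sketch below is an in-process stand-in for Kafka, with illustrative topic and field names:&lt;/p&gt;

```python
# In-process stand-in for Kafka-style fan-out: one published event
# reaches every handler subscribed to the topic. Names are illustrative.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:   # fan out to all consumers
        handler(event)

seen = []
subscribe("asset.quotes", seen.append)            # e.g. the asset service
subscribe("asset.quotes", lambda e: None)         # e.g. a dashboard feed
publish("asset.quotes", {"ticker": "VALE3", "price": 61.3})
assert seen == [{"ticker": "VALE3", "price": 61.3}]
```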

&lt;h3&gt;
  
  
  Persistence and Caching
&lt;/h3&gt;

&lt;p&gt;Each service utilizes the data strategy that best suits its specific purpose:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MySQL / PostgreSQL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Transactional data, orders, and wallet history.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MongoDB&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Market quotes history (Time-series data) within the Market Data API.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Redis&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;"Hot" cache for market prices to ensure ultra-high-speed queries.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
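&lt;p&gt;The Redis row follows the classic cache-aside pattern: read from the cache first, fall back to the authoritative store on a miss, then warm the cache. A minimal sketch, using plain dicts in place of the Redis and database clients (all names are illustrative):&lt;/p&gt;

```python
# Cache-aside lookup for "hot" market prices. Plain dicts stand in for
# the Redis cache and the authoritative store; names are illustrative.

def get_price(ticker, cache, db):
    price = cache.get(ticker)
    if price is not None:
        return price            # cache hit: the fast path
    price = db[ticker]          # cache miss: authoritative read
    cache[ticker] = price       # warm the cache for later readers
    return price

cache, db = {}, {"PETR4": 38.12}
assert get_price("PETR4", cache, db) == 38.12   # first call: miss
assert cache["PETR4"] == 38.12                  # cache is now warm
```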




&lt;h2&gt;
  
  
  Technical diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06fy2ed86ez0kdkhdiqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06fy2ed86ez0kdkhdiqo.png" alt=" " width="800" height="546"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🚀 What’s Next?
&lt;/h2&gt;

&lt;p&gt;This post is just the &lt;strong&gt;kickoff&lt;/strong&gt;. In the upcoming articles, I plan to detail:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; How to deploy all these resources on &lt;strong&gt;AWS (Free Tier)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Messaging:&lt;/strong&gt; A deep dive into &lt;strong&gt;Kafka&lt;/strong&gt; and &lt;strong&gt;RabbitMQ&lt;/strong&gt; configurations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical Challenges:&lt;/strong&gt; How I’m handling &lt;strong&gt;eventual consistency&lt;/strong&gt; and the matching engine's processing logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Feel free to leave your feedback or questions in the comments!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/series-roadmap-building-a-stock-brokerage-simulator-with-microservices-kgh"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>distributedsystems</category>
      <category>java</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Building a Microservices Ecosystem: Stock Brokerage Simulator (My Broker B3)</title>
      <dc:creator>Roberto de Vargas Neto</dc:creator>
      <pubDate>Sat, 14 Mar 2026 00:46:37 +0000</pubDate>
      <link>https://dev.to/rvneto/construindo-um-ecossistema-de-microservicos-simulador-de-corretora-de-valores-my-broker-b3-1g4n</link>
      <guid>https://dev.to/rvneto/construindo-um-ecossistema-de-microservicos-simulador-de-corretora-de-valores-my-broker-b3-1g4n</guid>
      <description>&lt;p&gt;Hello, everyone!&lt;/p&gt;

&lt;p&gt;I’m starting a series of articles to document the development of &lt;strong&gt;My Broker B3&lt;/strong&gt;. This is a personal project where I’m applying advanced software engineering, distributed systems, and messaging concepts to simulate the real-world operations of a stock brokerage.&lt;/p&gt;

&lt;p&gt;The main objective is to create an ecosystem that handles challenges of &lt;strong&gt;data consistency&lt;/strong&gt;, &lt;strong&gt;low latency&lt;/strong&gt;, and &lt;strong&gt;asynchronous communication&lt;/strong&gt;, integrating a simplified matching engine.&lt;/p&gt;




&lt;h1&gt;
  
  
  🏗️ The System Architecture
&lt;/h1&gt;

&lt;p&gt;The project was designed following a microservices philosophy, using a hybrid stack to leverage the best of each ecosystem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core Backend (Java/Spring Boot 3):&lt;/strong&gt; Responsible for the order (&lt;code&gt;broker-order-api&lt;/code&gt;), wallet (&lt;code&gt;broker-wallet-api&lt;/code&gt;), and asset management (&lt;code&gt;broker-asset-api&lt;/code&gt;) APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Data (Python):&lt;/strong&gt; An integrator (&lt;code&gt;broker-market-data-api&lt;/code&gt;) that manages market data ingestion via scheduled tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matching Engine (Java):&lt;/strong&gt; A B3 simulator (&lt;code&gt;b3-matching-engine-api&lt;/code&gt;) that processes the execution of orders sent by the brokerage.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  ⚙️ Data Flow and Technologies
&lt;/h1&gt;

&lt;p&gt;To ensure resilience and scalability, I adopted a hybrid communication strategy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Synchronous (REST):&lt;/strong&gt; Used for critical real-time validations, such as checking the wallet balance before allowing an order to be submitted.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Asynchronous (Event-Driven):&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Apache Kafka:&lt;/strong&gt; Acts as the internal event bus for distributing quotes and asset events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RabbitMQ:&lt;/strong&gt; Manages communication between the Brokerage and the B3 Simulator through dedicated queues.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Persistence and Caching
&lt;/h2&gt;

&lt;p&gt;Each service uses the data strategy that best fits its role:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tecnologia&lt;/th&gt;
&lt;th&gt;Caso de Uso&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MySQL / PostgreSQL&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Dados transacionais, ordens e histórico de carteira.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MongoDB&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Histórico de cotações (Time-series data) na API de Market Data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Redis&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Cache "quente" de preços de mercado para consultas de altíssima velocidade.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h1&gt;
  
  
  Technical diagram
&lt;/h1&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4yrajh6gk4crmwx9pi6r.png" alt=" " width="800" height="546"&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🚀 What’s Next?
&lt;/h1&gt;

&lt;p&gt;This post is just the "kickoff". In the upcoming articles, I plan to detail:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Infrastructure:&lt;/strong&gt; How to spin up all these resources on &lt;strong&gt;AWS (Free Tier)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Messaging:&lt;/strong&gt; A deep dive into the Kafka and RabbitMQ configuration.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Technical Challenges:&lt;/strong&gt; How to handle eventual consistency and the matching engine’s processing logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Feel free to leave feedback or questions in the comments!&lt;/p&gt;




&lt;h1&gt;
  
  
  🔎 About the series
&lt;/h1&gt;

&lt;p&gt;📘 Series Index: &lt;a href="https://dev.to/rvneto/guia-da-serie-construindo-um-simulador-de-corretora-com-microservicos-1kef"&gt;Series Guide&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.linkedin.com/in/roberto-de-vargas/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>java</category>
      <category>springboot</category>
      <category>architecture</category>
      <category>microservices</category>
    </item>
  </channel>
</rss>
