<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alex Bevilacqua</title>
    <description>The latest articles on DEV Community by Alex Bevilacqua (@alexbevi).</description>
    <link>https://dev.to/alexbevi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F604120%2F777958aa-501c-48bb-8cf0-46b9886dbd50.png</url>
      <title>DEV Community: Alex Bevilacqua</title>
      <link>https://dev.to/alexbevi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alexbevi"/>
    <language>en</language>
    <item>
      <title>Start With Context: Building the Retrieval Core for Agentic Apps</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Wed, 29 Apr 2026 13:47:44 +0000</pubDate>
      <link>https://dev.to/alexbevi/start-with-context-building-the-retrieval-core-for-agentic-apps-2j5g</link>
      <guid>https://dev.to/alexbevi/start-with-context-building-the-retrieval-core-for-agentic-apps-2j5g</guid>
      <description>&lt;p&gt;&lt;em&gt;Before you add planners, crews, or graph-shaped orchestration, build the part that decides what the model should actually see. In this first post, we’ll start an enterprise support copilot and give it the one capability every future agent depends on: retrieval that doesn’t fall apart in production.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In a recent post I made the case that MongoDB can serve as &lt;a href="https://alexbevi.com/blog/2026/04/15/mongodb-as-the-brain-of-modern-ai-applications/" rel="noopener noreferrer"&gt;the "brain" of a modern AI application&lt;/a&gt; by combining durable state, retrieval, and application data in one place. That framing still holds, but a brain is only useful if it can recall the right thing at the right time. This series digs into agentic application development in more detail, and the first real entry starts one layer below "agents" and one layer above raw storage: the context layer.&lt;/p&gt;

&lt;p&gt;That might sound slightly less glamorous than "multi-agent orchestration," which is exactly why it matters. Most enterprise AI systems do not fail because they lack a clever planner. They fail because the model sees the wrong document, too much irrelevant text, or none of the operational data that actually matters.&lt;/p&gt;

&lt;p&gt;To make this concrete, the application thread for this series will be an &lt;strong&gt;enterprise support escalation copilot&lt;/strong&gt; for a B2B SaaS team. By the end of the series, it should be able to answer questions about incidents, remember previous escalations, pull account context, and coordinate specialized agents when needed. Today, though, we’re giving it its first useful skill: finding the right context for the job.&lt;/p&gt;

&lt;p&gt;Think about the kind of question a real support engineer asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Acme’s enterprise tenant started seeing &lt;code&gt;INV-4421&lt;/code&gt; after upgrading to &lt;code&gt;3.8&lt;/code&gt;. Did we see this before, is there a known workaround, and does it affect EU clusters only?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is not a pure semantic search problem. It is part natural language, part exact identifier lookup, part metadata filtering, and part ranking problem. Error codes matter. Version numbers matter. Tenant boundaries matter. Timing matters. That’s why this is such a good place to start, and it’s the problem we’ll dig into with &lt;a href="https://www.mongodb.com/products/platform/atlas-database" rel="noopener noreferrer"&gt;MongoDB&lt;/a&gt; and &lt;a href="https://www.mongodb.com/docs/voyageai/" rel="noopener noreferrer"&gt;Voyage AI&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview" rel="noopener noreferrer"&gt;MongoDB Vector Search&lt;/a&gt; is built to search vector data alongside the rest of your operational data, supports filtering on other fields in the collection, and can be combined with full-text search for hybrid retrieval. MongoDB’s hybrid search documentation explicitly describes combining semantic and full-text search results with Reciprocal Rank Fusion, which is exactly what you want when a query mixes fuzzy intent with exact strings like issue IDs, SKUs, or feature flags.&lt;/p&gt;
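&lt;p&gt;Reciprocal Rank Fusion is worth seeing in miniature. The sketch below is an illustrative, standalone version of the scoring idea, not MongoDB's internal implementation; the constant &lt;code&gt;60&lt;/code&gt; mirrors the commonly used default.&lt;/p&gt;

```python
# Minimal Reciprocal Rank Fusion sketch: score each document 1 / (k + rank)
# in every ranked list it appears in, then sum. The constant k damps the
# advantage of being ranked first in any single list.

def rrf_fuse(ranked_lists, k=60):
    """ranked_lists: iterable of lists of document ids, best first."""
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]    # semantic ranking
fulltext_hits = ["doc_c", "doc_a", "doc_d"]  # exact-string ranking ("INV-4421")
fused = rrf_fuse([vector_hits, fulltext_hits])
```

&lt;p&gt;A document that an exact-string match ranks first can still win overall even when the semantic ranking barely notices it, which is precisely the behavior you want for error codes and SKUs.&lt;/p&gt;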

&lt;p&gt;On the retrieval-model side, Voyage provides high-accuracy embedding and reranking models, including newer capabilities like contextualized chunk embeddings, multimodal embeddings, and rerankers designed to refine the top candidate set after initial retrieval. MongoDB Atlas now also exposes Voyage models through its Embedding and Reranking API, currently in preview, which means you can either call Voyage models directly or keep retrieval models, vector search, and operational data closer together under Atlas.&lt;/p&gt;

&lt;p&gt;So what does the retrieval core for our support copilot actually do?&lt;/p&gt;

&lt;p&gt;First, it stores source material in MongoDB: runbooks, release notes, KB articles, previous incident reviews, ticket summaries, and whatever structured account data the support flow needs. Then it chunks the long-form content, embeds it with Voyage, and stores the vectors with the source text and metadata. At query time, it narrows scope using metadata like tenant, product, region, or severity; retrieves candidates semantically and lexically; reranks the best matches; and only then hands a compact, relevant context window to the LLM. In other words: don’t ask the model to be psychic when the database can be specific. &lt;/p&gt;
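&lt;p&gt;The ingestion half of that flow is mostly plumbing. As a minimal sketch (the chunk sizes and the document fields in the comment are assumptions, not recommendations), the chunking step could look like:&lt;/p&gt;

```python
# Split long-form source material into overlapping chunks before embedding.
# Overlap keeps sentences that straddle a boundary retrievable from either side.

def chunk_text(text, size=800, overlap=100):
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
        start += size - overlap
    return chunks

# Each chunk would then be embedded (e.g. with a Voyage embedding model) and
# stored in MongoDB next to its source text and filterable metadata, roughly:
#   {"text": chunk, "embedding": [...], "tenant_id": "acme",
#    "product": "platform", "region": "eu", "doc_type": "runbook"}
chunks = chunk_text("x" * 2000)
```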

&lt;p&gt;There are a lot of AI frameworks right now, and they absolutely do not all feel the same. But this is the first important pattern in the series: &lt;strong&gt;the framework should shape the developer experience, not force you to redesign the data layer every six months&lt;/strong&gt;. The retrieval architecture is the stable part, with MongoDB and Voyage AI as its stable pieces. LangChain, LlamaIndex, Haystack, LangGraph, CrewAI, or whatever comes next should be able to sit on top of that foundation.&lt;/p&gt;

&lt;h2&gt;A framework-agnostic mental model&lt;/h2&gt;

&lt;p&gt;Before jumping into code, here is the mental model I’d keep fixed no matter which framework you prefer:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Put source documents and operational records in MongoDB.&lt;/li&gt;
&lt;li&gt;Generate embeddings with Voyage.&lt;/li&gt;
&lt;li&gt;Index vector fields and filter fields in MongoDB.&lt;/li&gt;
&lt;li&gt;Use semantic retrieval for meaning.&lt;/li&gt;
&lt;li&gt;Use full-text retrieval for exact strings.&lt;/li&gt;
&lt;li&gt;Rerank the candidate set before generation.&lt;/li&gt;
&lt;li&gt;Return only the context the model actually needs.&lt;/li&gt;
&lt;/ol&gt;
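&lt;p&gt;Step 3 deserves a concrete shape: in Atlas Vector Search, the vector field and the filterable metadata fields are declared together in one index definition. A sketch for this collection might be (the dimension count must match whichever embedding model you pick):&lt;/p&gt;

```python
# Illustrative Atlas Vector Search index definition for the support.context
# collection: one vector field plus the metadata fields we want to pre-filter on.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",
            "numDimensions": 1024,  # must match the embedding model's output size
            "similarity": "cosine",
        },
        {"type": "filter", "path": "tenant_id"},
        {"type": "filter", "path": "product"},
        {"type": "filter", "path": "region"},
        {"type": "filter", "path": "severity"},
    ]
}
```

&lt;p&gt;Fields that are not declared as &lt;code&gt;filter&lt;/code&gt; fields here cannot be used to pre-filter the vector search, which is why tenant and region boundaries belong in the index from day one.&lt;/p&gt;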

&lt;p&gt;That shape maps cleanly to both MongoDB Vector Search and Voyage’s model stack. MongoDB handles vector indexes, full-text search, filterable metadata, and live application data; Voyage handles embeddings and reranking; the framework becomes the control surface. &lt;/p&gt;

&lt;h2&gt;Approach 1: LangChain for the shortest path from data to grounded answers&lt;/h2&gt;

&lt;p&gt;If the goal is to get a retrieval-backed application running quickly, LangChain remains a very practical starting point. &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/" rel="noopener noreferrer"&gt;MongoDB’s LangChain integration&lt;/a&gt; supports vector search, full-text search, and a hybrid retriever that combines both with Reciprocal Rank Fusion. It also supports pre-filtering with MQL expressions, which matters immediately for tenant scoping and product boundaries.&lt;/p&gt;

&lt;p&gt;An illustrative version for our support copilot looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_voyageai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VoyageAIEmbeddings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_mongodb.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasVectorSearch&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_mongodb.retrievers.hybrid_search&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasHybridSearchRetriever&lt;/span&gt;

&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VoyageAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VOYAGE_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;voyage-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasVectorSearch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_connection_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;connection_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support.context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;index_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support_vector_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBAtlasHybridSearchRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;search_index_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support_search_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;fulltext_penalty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;60.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;vector_penalty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;60.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Acme tenant seeing INV-4421 after upgrading to 3.8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In a production version, I’d pair that with metadata filters on fields like &lt;code&gt;tenant_id&lt;/code&gt;, &lt;code&gt;product&lt;/code&gt;, &lt;code&gt;region&lt;/code&gt;, and &lt;code&gt;severity&lt;/code&gt;, then pass the top candidates through a Voyage reranker before generation. The point is not that LangChain is magical. The point is that the MongoDB + Voyage retrieval story already fits the way LangChain applications are commonly assembled. &lt;/p&gt;
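&lt;p&gt;As a sketch of that reranking pass using the &lt;code&gt;voyageai&lt;/code&gt; client (the model name and character budget are illustrative; check the Voyage docs for the current reranker lineup):&lt;/p&gt;

```python
# Rerank hybrid-search candidates with Voyage, then pack the winners into a
# compact context window for the LLM.

def build_context(texts, max_chars=4000):
    """Concatenate documents, best first, until the character budget is spent."""
    kept, used = [], 0
    for text in texts:
        if used + len(text) > max_chars:
            break
        kept.append(text)
        used += len(text)
    return "\n\n".join(kept)

def rerank_with_voyage(query, documents, top_k=3):
    # Illustrative call; the model name is an assumption.
    import voyageai  # reads VOYAGE_API_KEY from the environment

    vo = voyageai.Client()
    result = vo.rerank(query, documents, model="rerank-2.5", top_k=top_k)
    return [r.document for r in result.results]  # ordered best match first
```

&lt;p&gt;In practice you would retrieve a wider candidate set than you intend to keep (say 25) so the reranker has a meaningful pool to cut down.&lt;/p&gt;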

&lt;h2&gt;Approach 2: LlamaIndex when the center of gravity is the data itself&lt;/h2&gt;

&lt;p&gt;If LangChain often feels application-first, LlamaIndex tends to feel data-first. That makes it a very natural fit when you want to spend more time shaping ingestion, chunking, metadata, and query behavior.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/llamaindex/" rel="noopener noreferrer"&gt;MongoDB’s LlamaIndex integration&lt;/a&gt; we can pair &lt;code&gt;VoyageEmbedding&lt;/code&gt; with &lt;code&gt;MongoDBAtlasVectorSearch&lt;/code&gt; and make metadata filters explicit, which matters for real enterprise retrieval, where "give me the right answer" usually means "give me the right answer for &lt;em&gt;this tenant&lt;/em&gt;, &lt;em&gt;this region&lt;/em&gt;, and &lt;em&gt;this product line&lt;/em&gt;."&lt;/p&gt;

&lt;p&gt;The shape is roughly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymongo&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;llama_index.embeddings.voyageai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VoyageEmbedding&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;llama_index.vector_stores.mongodb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasVectorSearch&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;llama_index.core&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StorageContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;VectorStoreIndex&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;llama_index.core.retrievers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VectorIndexRetriever&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;llama_index.core.vector_stores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MetadataFilters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ExactMatchFilter&lt;/span&gt;

&lt;span class="n"&gt;embed_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VoyageEmbedding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;voyage_api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;VOYAGE_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;voyage-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;mongo_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBAtlasVectorSearch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;mongo_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;db_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;collection_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;vector_index_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support_vector_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;storage_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;StorageContext&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_defaults&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# docs is your loaded support corpus, such as runbooks, incident reviews,
# release notes, and ticket summaries.
&lt;/span&gt;&lt;span class="n"&gt;vector_index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;VectorStoreIndex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;storage_context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;storage_context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embed_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embed_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;filters&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MetadataFilters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;ExactMatchFilter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tenant_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;acme&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VectorIndexRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vector_index&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;similarity_top_k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;nodes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;retrieve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Known workaround for INV-4421?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What I like about this path is that it keeps the retrieval pipeline honest. You can see the data model. You can see the filter model. You can see how chunking choices affect what comes back. For article one in a series like this, that clarity is useful because it keeps us focused on context quality before we get distracted by agent loops.&lt;/p&gt;

&lt;h2&gt;Approach 3: Haystack when you want explicit, composable pipelines&lt;/h2&gt;

&lt;p&gt;Haystack is a nice fit for teams that prefer explicit components over higher-level abstractions. &lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/haystack/" rel="noopener noreferrer"&gt;MongoDB’s Haystack integration&lt;/a&gt; uses a &lt;code&gt;MongoDBAtlasDocumentStore&lt;/code&gt; with MongoDB retrievers, and the official tutorial pairs that with Voyage embedders. Haystack’s MongoDB integration also has separate semantic and full-text retrievers, which is useful when you want to make the retrieval strategy itself a first-class part of the pipeline.&lt;/p&gt;

&lt;p&gt;A trimmed-down version looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;haystack&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pipeline&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;haystack.utils&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Secret&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;haystack_integrations.components.embedders.voyage_embedders&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;VoyageTextEmbedder&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;haystack_integrations.document_stores.mongodb_atlas&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasDocumentStore&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;haystack_integrations.components.retrievers.mongodb_atlas&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasEmbeddingRetriever&lt;/span&gt;

&lt;span class="n"&gt;document_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBAtlasDocumentStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;mongo_connection_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;Secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_env_var&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;database_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;collection_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;vector_search_index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support_vector_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;full_text_search_index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support_search_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_component&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query_embedder&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;VoyageTextEmbedder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;voyage-4&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_component&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;retriever&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nc"&gt;MongoDBAtlasEmbeddingRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document_store&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;document_store&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;top_k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query_embedder.embedding&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;retriever.query_embedding&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;query_embedder&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Known workaround for INV-4421?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is probably the most "pipes and fittings" version of the three, and that is a compliment. For enterprise teams, explicit systems are often easier to debug, evaluate, and explain. And once again, the interesting part is not that the framework is different. It is that the same MongoDB + Voyage retrieval core still fits.&lt;/p&gt;

&lt;h2&gt;Why MongoDB&lt;/h2&gt;

&lt;p&gt;The support copilot does not just need chunks in a vector store. It needs chunks, source documents, tenant metadata, ticket references, account records, release versions, and eventually execution state. MongoDB Vector Search lets you search semantic meaning alongside that operational data, pre-filter the search space using indexed fields, and combine vector and full-text retrieval when exact terms matter. Change streams then give you a way to react to new or updated records in real time, which is exactly what you want when incidents, tickets, or KB articles change during the workday. &lt;/p&gt;
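&lt;p&gt;A minimal change-stream sketch (collection names follow the earlier examples; the re-embedding callback is a placeholder you would supply):&lt;/p&gt;

```python
# React to new or updated context documents as they land, so embeddings stay
# current while incidents, tickets, and KB articles change during the workday.

# Only surface the event types that can change retrievable content.
CHANGE_PIPELINE = [
    {"$match": {"operationType": {"$in": ["insert", "update", "replace"]}}}
]

def watch_for_updates(uri, reembed):
    """reembed: callback that re-chunks and re-embeds one changed document."""
    from pymongo import MongoClient  # deferred so the sketch imports cleanly

    coll = MongoClient(uri)["support"]["context"]
    # full_document="updateLookup" returns the post-update document image.
    with coll.watch(CHANGE_PIPELINE, full_document="updateLookup") as stream:
        for change in stream:
            reembed(change["fullDocument"])
```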

&lt;p&gt;And if you want an even tighter platform story, MongoDB Atlas now exposes Voyage models directly through the &lt;a href="https://www.mongodb.com/docs/voyageai/api-reference/overview/" rel="noopener noreferrer"&gt;Embedding and Reranking API&lt;/a&gt;. That API is database-agnostic, but it pairs especially well with Atlas because it reduces the number of moving pieces needed to stand up a modern retrieval pipeline. Fewer services, fewer credentials, and less time spent debugging "why is this top result here?"&lt;/p&gt;

&lt;p&gt;This is also where the framework story becomes easier to reason about. LangChain, LlamaIndex, and Haystack all give you different ergonomics. MongoDB stays the system where the data lives. Voyage stays the retrieval layer that improves what gets surfaced. That is a much more durable architecture than betting everything on whichever orchestration framework happens to be loudest this quarter.&lt;/p&gt;

&lt;h2&gt;What next?&lt;/h2&gt;

&lt;p&gt;Once the retrieval core is solid, adding agents becomes a lot more interesting.&lt;/p&gt;

&lt;p&gt;In the next post, I’ll take this same support copilot and add &lt;strong&gt;short-term execution state&lt;/strong&gt; and &lt;strong&gt;long-term memory&lt;/strong&gt;. LangGraph is a natural next step, as it &lt;a href="https://docs.langchain.com/oss/python/langgraph/persistence" rel="noopener noreferrer"&gt;separates persistence&lt;/a&gt; into checkpoints for thread state and stores for long-term memory, and MongoDB already has first-class integrations for both the LangGraph checkpointer and the long-term store. That is where the earlier "brain" idea becomes concrete: not just retrieval, but retrieval plus memory plus durable execution.&lt;/p&gt;

&lt;p&gt;The broader trend line is pretty clear, too. Agent frameworks are converging on durable state and memory. Retrieval models are getting richer with &lt;a href="https://docs.voyageai.com/docs/contextualized-chunk-embeddings" rel="noopener noreferrer"&gt;contextualized chunk embeddings&lt;/a&gt;, multimodal embeddings, and better rerankers. MongoDB Atlas is moving retrieval models and database capabilities closer together. The winning application architecture is the one that can absorb those changes without forcing you to rebuild your data layer every few months. MongoDB and Voyage AI fit that direction unusually well. &lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>mongodb</category>
      <category>rag</category>
    </item>
    <item>
      <title>Persistent multi-agent conversations with the OpenAI Agents SDK and MongoDB</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Mon, 27 Apr 2026 16:49:36 +0000</pubDate>
      <link>https://dev.to/alexbevi/persistent-multi-agent-conversations-with-the-openai-agents-sdk-and-mongodb-2n7f</link>
      <guid>https://dev.to/alexbevi/persistent-multi-agent-conversations-with-the-openai-agents-sdk-and-mongodb-2n7f</guid>
      <description>&lt;p&gt;&lt;em&gt;Version 0.14.2 added a &lt;code&gt;MongoDBSession&lt;/code&gt; backend; here's a working multi-agent customer-support demo that uses it, and the documents it leaves behind.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The OpenAI Agents SDK has shipped session backends for SQLite, SQLAlchemy, Redis, and Dapr for a while now. With &lt;strong&gt;0.14.2&lt;/strong&gt; (April 2026), &lt;a href="https://openai.github.io/openai-agents-python/sessions/" rel="noopener noreferrer"&gt;&lt;code&gt;MongoDBSession&lt;/code&gt; joined that list&lt;/a&gt;, and 0.14.6 added the docs page. If you're already running MongoDB for application data, this is the moment to stop standing up a second store just to remember what the agent said three turns ago. The demo for this walkthrough is a small e-commerce support app with three handoff-connected agents and one MongoDB instance behind everything: customers, orders, support articles, &lt;strong&gt;and&lt;/strong&gt; the conversation history. &lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/alexbevi/mongodb-openai-agents-sdk-example" rel="noopener noreferrer"&gt;https://github.com/alexbevi/mongodb-openai-agents-sdk-example&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you'll build
&lt;/h2&gt;

&lt;p&gt;A CLI customer-support agent that identifies the user from MongoDB, hands off between a triage agent, an order-support agent, and a knowledge-base agent, and persists every turn (user message, tool call, tool output, assistant reply, handoff) to MongoDB via &lt;code&gt;MongoDBSession&lt;/code&gt;. You quit, restart the process, log in with the same email, and the agent picks up the thread — no re-explaining the return you started yesterday.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MongoDB for sessions
&lt;/h2&gt;

&lt;p&gt;A session backend has three jobs: store one item per turn, return them in order on the next run, and not corrupt itself when two processes write at once. The interesting part for MongoDB is how naturally each of those maps to things the database already does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Items in a session are heterogeneous.&lt;/strong&gt; A turn can be a user message, a tool call, a tool result, an assistant message, or a handoff record — each with its own shape. A document store takes those payloads as-is. There's no &lt;code&gt;messages&lt;/code&gt; table you have to migrate every time the SDK adds a new run-item type, and no JSON column to parse around.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ordering is the hard part, and &lt;code&gt;$inc&lt;/code&gt; is built for it.&lt;/strong&gt; &lt;code&gt;MongoDBSession&lt;/code&gt; stamps each message with a monotonically increasing &lt;code&gt;seq&lt;/code&gt; counter — the SDK docs call this out explicitly: it preserves ordering across concurrent writers and processes. That's a single-document atomic increment, not a distributed lock or an optimistic-retry loop. Two FastAPI workers handling the same &lt;code&gt;session_id&lt;/code&gt; can't collide on a sequence number or corrupt the ordering.&lt;/p&gt;
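
&lt;p&gt;A sketch of the mechanism — this is an in-memory stand-in, not the SDK's actual code, but it shows why a per-session counter gives deterministic order:&lt;/p&gt;

```python
# In-memory stand-in for the agent_sessions counter document. MongoDB does
# the same read-modify-write server-side as one atomic operation, roughly:
#
#   doc = await sessions.find_one_and_update(
#       {"session_id": sid}, {"$inc": {"seq": 1}},
#       upsert=True, return_document=ReturnDocument.AFTER)
#   message["seq"] = doc["seq"]

counters = {}  # session_id -> current high-water seq

def next_seq(session_id):
    """Bump and return the per-session turn counter."""
    counters[session_id] = counters.get(session_id, 0) + 1
    return counters[session_id]

seqs = [next_seq("support_alice_at_example_com") for _ in range(4)]
print(seqs)  # [1, 2, 3, 4] -- strictly increasing, no gaps
```

&lt;p&gt;Because the increment and the read of the new value happen in one server-side operation, two writers can never receive the same &lt;code&gt;seq&lt;/code&gt;.&lt;/p&gt;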

&lt;p&gt;&lt;strong&gt;One store, one connection pool.&lt;/strong&gt; This is the angle the demo actually showcases. The &lt;code&gt;ecommerce_support&lt;/code&gt; database holds &lt;code&gt;customers&lt;/code&gt;, &lt;code&gt;orders&lt;/code&gt;, and &lt;code&gt;support_articles&lt;/code&gt; &lt;em&gt;next to&lt;/em&gt; &lt;code&gt;agent_sessions&lt;/code&gt; and &lt;code&gt;agent_messages&lt;/code&gt;. Tools query operational data, the SDK persists turns, and they share the same &lt;code&gt;AsyncMongoClient&lt;/code&gt;. Adding session memory cost zero new infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Walkthrough
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Prerequisites
&lt;/h3&gt;

&lt;p&gt;Python 3.10+, an OpenAI API key, and either a local &lt;code&gt;mongod&lt;/code&gt; or a &lt;a href="https://www.mongodb.com/cloud/atlas/register" rel="noopener noreferrer"&gt;MongoDB Atlas&lt;/a&gt; cluster. Nothing in the demo requires Atlas-only features — a &lt;code&gt;mongod&lt;/code&gt; listening on localhost:27017 is fine.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Install
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;requirements.txt&lt;/code&gt; pins the new extra:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openai-agents[mongodb]&amp;gt;=0.14.2
python-dotenv&amp;gt;=1.0.0
pymongo&amp;gt;=4.13
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;[mongodb]&lt;/code&gt; extra pulls in &lt;code&gt;pymongo&lt;/code&gt;'s async client; the &lt;code&gt;MongoDBSession&lt;/code&gt; class lives at &lt;code&gt;agents.extensions.memory.MongoDBSession&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Connect
&lt;/h3&gt;

&lt;p&gt;The demo uses one shared &lt;code&gt;AsyncMongoClient&lt;/code&gt; per process (the right pattern — sessions don't own the client, they share its pool):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymongo.asynchronous.mongo_client&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AsyncMongoClient&lt;/span&gt;

&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mongodb://localhost:27017&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;DB_NAME&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ecommerce_support&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;mongo_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AsyncMongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mongo_client&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;DB_NAME&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;mongo_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;admin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ping&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Cannot connect to MongoDB (&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;):&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;  &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;exc&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Seed and identify
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;python seed_data.py&lt;/code&gt; loads three demo customers, five products, five orders with embedded line items, and seven support articles indexed for &lt;code&gt;$text&lt;/code&gt; search. Then &lt;code&gt;main.py&lt;/code&gt; looks the customer up so the triage agent doesn't have to ask for an email it already knows.&lt;/p&gt;
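
&lt;p&gt;The shape of that &lt;code&gt;$text&lt;/code&gt; lookup is worth seeing on its own. A minimal sketch — the field names here are assumptions, not necessarily the repo's exact schema:&lt;/p&gt;

```python
# Filter, projection, and sort for a text-scored article search. With the
# async client this would run as:
#   cursor = db.support_articles.find(flt, proj).sort(srt).limit(3)

def kb_query(terms):
    """Build the pieces of a $text search ranked by relevance score."""
    flt = {"$text": {"$search": terms}}            # uses the text index
    proj = {"score": {"$meta": "textScore"}}       # surface the score
    srt = [("score", {"$meta": "textScore"})]      # best match first
    return flt, proj, srt

flt, proj, srt = kb_query("return policy refund")
print(flt["$text"]["$search"])  # return policy refund
```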

&lt;h3&gt;
  
  
  5. Instantiate the session
&lt;/h3&gt;

&lt;p&gt;This is the integration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;support_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;email&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;@&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_at_&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBSession&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mongo_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;database&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DB_NAME&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ping&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Warning: MongoDB session storage is unavailable.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;existing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_items&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Constructing with &lt;code&gt;client=&lt;/code&gt; (rather than &lt;code&gt;MongoDBSession.from_uri(...)&lt;/code&gt;) means the session shares the app's connection pool and &lt;code&gt;session.close()&lt;/code&gt; becomes a no-op — the lifecycle stays with you. &lt;code&gt;session.ping()&lt;/code&gt; is a real round-trip against MongoDB, useful for liveness probes.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Run
&lt;/h3&gt;

&lt;p&gt;Pass &lt;code&gt;session=&lt;/code&gt; to the runner. Everything else is the same SDK you already know:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Customer Support&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;group_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;conversation_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;current_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# MongoDB stores every turn automatically
&lt;/span&gt;    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Have a conversation, &lt;code&gt;quit&lt;/code&gt;, run &lt;code&gt;python main.py&lt;/code&gt; again with the same email, and the next message gets the full prior context prepended automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  What MongoDB actually stored
&lt;/h2&gt;

&lt;p&gt;After a few turns with &lt;code&gt;alice@example.com&lt;/code&gt;, two collections show up in the &lt;code&gt;ecommerce_support&lt;/code&gt; database. The interesting one is &lt;code&gt;agent_messages&lt;/code&gt;. A representative document, abridged:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ObjectId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;6620d1f4...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="nx"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;support_alice_at_example_com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;// partition key for this conversation&lt;/span&gt;
  &lt;span class="nx"&gt;seq&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;                                        &lt;span class="c1"&gt;// monotonically increasing turn order&lt;/span&gt;
  &lt;span class="nx"&gt;message_data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;                                &lt;span class="c1"&gt;// the SDK's run-item, stored as-is&lt;/span&gt;
    &lt;span class="nl"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;function_call_output&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;call_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;call_8b2...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Return initiated for order ORD-1001.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;Reason: Not powerful enough...&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;Estimated refund: $1,484.98 (includes 10% Gold loyalty bonus)&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nx"&gt;created_at&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ISODate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2026-04-26T19:14:08.221Z&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three fields earn their keep:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;session_id&lt;/code&gt;&lt;/strong&gt; is the only field every read filters on. It's the partition key for "this conversation."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;seq&lt;/code&gt;&lt;/strong&gt; is the integer that makes ordering deterministic. The SDK reads with &lt;code&gt;sort({ seq: 1 })&lt;/code&gt; and writes with an atomic &lt;code&gt;$inc&lt;/code&gt; against the matching &lt;code&gt;agent_sessions&lt;/code&gt; document, which is what makes concurrent workers safe without a distributed lock.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;message_data&lt;/code&gt;&lt;/strong&gt; is the SDK's run-item — a user message, tool call, tool output, assistant message, or handoff. Different shape every time. The document model just stores it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;agent_sessions&lt;/code&gt; holds one document per &lt;code&gt;session_id&lt;/code&gt; with the current high-water &lt;code&gt;seq&lt;/code&gt; and timestamps — that's the counter &lt;code&gt;$inc&lt;/code&gt; operates on.&lt;/p&gt;

&lt;p&gt;The SDK creates its indexes on first use (per the &lt;a href="https://openai.github.io/openai-agents-python/sessions/" rel="noopener noreferrer"&gt;sessions docs&lt;/a&gt;). You'll see a compound index on &lt;code&gt;(session_id, seq)&lt;/code&gt; on &lt;code&gt;agent_messages&lt;/code&gt; (the only access pattern the SDK has — fetch ordered history for one session) and a unique index on &lt;code&gt;session_id&lt;/code&gt; in &lt;code&gt;agent_sessions&lt;/code&gt;.&lt;/p&gt;
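
&lt;p&gt;Shown only to make those shapes concrete (the SDK creates them itself; 1 is pymongo's &lt;code&gt;ASCENDING&lt;/code&gt;):&lt;/p&gt;

```python
# Index specs equivalent to what appears after first use. With pymongo:
#   db.agent_messages.create_index(MESSAGES_INDEX)
#   db.agent_sessions.create_index(SESSIONS_INDEX, unique=True)

MESSAGES_INDEX = [("session_id", 1), ("seq", 1)]  # ordered history per session
SESSIONS_INDEX = [("session_id", 1)]              # one counter doc per session

print(MESSAGES_INDEX)
```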

&lt;h2&gt;
  
  
  Production notes
&lt;/h2&gt;

&lt;p&gt;For Atlas, swap the URI for &lt;code&gt;mongodb+srv://...&lt;/code&gt; — &lt;code&gt;MongoDBSession&lt;/code&gt; accepts it without any other change. If abandoned conversations accumulate, add a TTL index on &lt;code&gt;agent_messages.created_at&lt;/code&gt; and old turns retire on their own.&lt;/p&gt;
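
&lt;p&gt;That TTL index is a single call. A sketch, with the 30-day window chosen arbitrarily:&lt;/p&gt;

```python
# Retire abandoned turns automatically: documents whose created_at is older
# than TTL_SECONDS are removed by MongoDB's background TTL monitor.
# With pymongo:
#   db.agent_messages.create_index("created_at", expireAfterSeconds=TTL_SECONDS)

TTL_SECONDS = 60 * 60 * 24 * 30  # 30 days

print(TTL_SECONDS)  # 2592000
```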

&lt;p&gt;Connection lifetime matters: keep one &lt;code&gt;AsyncMongoClient&lt;/code&gt; per process, construct &lt;code&gt;MongoDBSession(client=...)&lt;/code&gt; per request, and let the Runner do the rest. Don't reach for &lt;code&gt;MongoDBSession.from_uri(...)&lt;/code&gt; in a web handler — it builds and tears down a client every call. The session needs read/write on the two configured collections (defaults &lt;code&gt;agent_sessions&lt;/code&gt; and &lt;code&gt;agent_messages&lt;/code&gt;, both overridable via &lt;code&gt;sessions_collection=&lt;/code&gt; and &lt;code&gt;messages_collection=&lt;/code&gt;). The &lt;code&gt;seq&lt;/code&gt; counter keeps concurrent writers safe, but fanning the same &lt;code&gt;session_id&lt;/code&gt; across processes will interleave their turns — safe, but probably not what the user meant.&lt;/p&gt;
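
&lt;p&gt;In outline, assuming a web context (the demo itself is a CLI), the lifecycle looks like this:&lt;/p&gt;

```python
# One client at process startup, one lightweight session object per request.
# At startup:   mongo_client = AsyncMongoClient(MONGODB_URI)
# Per request:  session = MongoDBSession(
#                   session_id=make_session_id(email),
#                   client=mongo_client, database=DB_NAME)

def make_session_id(email):
    """Derive the per-user session key the demo uses."""
    return "support_" + email.replace("@", "_at_").replace(".", "_")

print(make_session_id("alice@example.com"))  # support_alice_at_example_com
```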

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/alexbevi/mongodb-openai-agents-sdk-example
&lt;span class="nb"&gt;cd &lt;/span&gt;mongodb-openai-agents-sdk-example
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="nb"&gt;cp &lt;/span&gt;env.example .env          &lt;span class="c"&gt;# set OPENAI_API_KEY and MONGODB_URI&lt;/span&gt;
python seed_data.py
python main.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Required env vars: &lt;code&gt;OPENAI_API_KEY&lt;/code&gt;, &lt;code&gt;MONGODB_URI&lt;/code&gt; (defaults to &lt;code&gt;mongodb://localhost:27017&lt;/code&gt;). Demo accounts: &lt;code&gt;alice@example.com&lt;/code&gt; (Gold), &lt;code&gt;bob@example.com&lt;/code&gt; (Standard), &lt;code&gt;carol@example.com&lt;/code&gt; (Platinum).&lt;/p&gt;

&lt;h2&gt;
  
  
  Where to go next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The full session API surface — &lt;code&gt;get_items&lt;/code&gt;, &lt;code&gt;add_items&lt;/code&gt;, &lt;code&gt;pop_item&lt;/code&gt;, &lt;code&gt;clear_session&lt;/code&gt;, &lt;code&gt;ping&lt;/code&gt; — is documented in the &lt;a href="https://openai.github.io/openai-agents-python/sessions/" rel="noopener noreferrer"&gt;Sessions overview&lt;/a&gt;, including the MongoDB-specific notes on collection naming and Atlas URIs.&lt;/li&gt;
&lt;li&gt;Wrap your &lt;code&gt;MongoDBSession&lt;/code&gt; in &lt;a href="https://openai.github.io/openai-agents-python/sessions/" rel="noopener noreferrer"&gt;&lt;code&gt;OpenAIResponsesCompactionSession&lt;/code&gt;&lt;/a&gt; once threads grow long; it summarizes old turns server-side and rewrites the underlying session.&lt;/li&gt;
&lt;li&gt;The natural next MongoDB feature for this demo is &lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/" rel="noopener noreferrer"&gt;Atlas Vector Search&lt;/a&gt; — store embeddings on &lt;code&gt;support_articles&lt;/code&gt; and replace the &lt;code&gt;$text&lt;/code&gt; query in &lt;code&gt;search_knowledge_base&lt;/code&gt; with &lt;code&gt;$vectorSearch&lt;/code&gt;. Same database, same client, one new index.&lt;/li&gt;
&lt;/ul&gt;
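
&lt;p&gt;For that last point, the swap is mostly one aggregation stage. A sketch in which the index name, embedding field, and candidate count are assumptions, not values from the repo:&lt;/p&gt;

```python
# Drop-in replacement for the $text lookup once support_articles carries
# embeddings. With the async client:
#   cursor = await db.support_articles.aggregate([stage])

def vector_search_stage(query_vector, limit=3):
    """Build a $vectorSearch stage over a hypothetical article index."""
    return {
        "$vectorSearch": {
            "index": "support_articles_vector",  # hypothetical index name
            "path": "embedding",                 # hypothetical field name
            "queryVector": query_vector,
            "numCandidates": 50,                 # ANN candidate pool
            "limit": limit,
        }
    }

stage = vector_search_stage([0.1, 0.2, 0.3])
print(stage["$vectorSearch"]["limit"])  # 3
```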

</description>
      <category>ai</category>
      <category>openai</category>
      <category>mongodb</category>
      <category>python</category>
    </item>
    <item>
      <title>MongoDB as the Brain of Modern AI Applications</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Thu, 16 Apr 2026 12:39:11 +0000</pubDate>
      <link>https://dev.to/alexbevi/mongodb-as-the-brain-of-modern-ai-applications-2552</link>
      <guid>https://dev.to/alexbevi/mongodb-as-the-brain-of-modern-ai-applications-2552</guid>
      <description>&lt;p&gt;Production agents need two persistence layers: thread-scoped state and cross-session memory. &lt;a href="https://adk.dev/sessions/session/" rel="noopener noreferrer"&gt;Google ADK Sessions&lt;/a&gt; stores &lt;code&gt;events&lt;/code&gt; and &lt;code&gt;state&lt;/code&gt; for a single conversation, while &lt;a href="https://adk.dev/sessions/memory/" rel="noopener noreferrer"&gt;&lt;code&gt;MemoryService&lt;/code&gt;&lt;/a&gt; handles recall across sessions. &lt;a href="https://docs.langchain.com/oss/python/langgraph/add-memory" rel="noopener noreferrer"&gt;LangGraph memory&lt;/a&gt; makes the same split with a checkpointer for short-term memory and a store for long-term memory, and &lt;a href="https://docs.langchain.com/oss/python/langchain/long-term-memory" rel="noopener noreferrer"&gt;LangChain long-term memory&lt;/a&gt; builds on LangGraph stores that persist JSON documents by namespace and key. The memory architecture has already converged.&lt;/p&gt;

&lt;p&gt;Durable memory is not raw chat replay. &lt;a href="https://docs.cloud.google.com/agent-builder/agent-engine/memory-bank/overview" rel="noopener noreferrer"&gt;Vertex AI Memory Bank&lt;/a&gt; is built for identity-scoped, cross-session personalization and LLM-driven knowledge extraction, and &lt;a href="https://cloud.google.com/blog/topics/developers-practitioners/remember-this-agent-state-and-memory-with-adk" rel="noopener noreferrer"&gt;Google’s ADK memory write-up&lt;/a&gt; describes Memory Bank as extracting key information from session data rather than replaying every turn. &lt;a href="https://docs.langchain.com/oss/python/concepts/memory" rel="noopener noreferrer"&gt;LangChain’s memory model&lt;/a&gt; is equally explicit: long-term memory can be semantic (facts), episodic (past actions), or procedural (rules and prompts). &lt;/p&gt;

&lt;p&gt;Agent memory should be structured data, not opaque blobs. &lt;a href="https://docs.langchain.com/oss/python/langchain/long-term-memory" rel="noopener noreferrer"&gt;LangChain stores&lt;/a&gt; persist long-term memory as JSON documents, and ADK’s &lt;a href="https://adk.dev/sessions/session/migrate/" rel="noopener noreferrer"&gt;&lt;code&gt;DatabaseSessionService&lt;/code&gt; migration&lt;/a&gt; moved session serialization from pickle-based storage to JSON-based storage in v1.22.0. MongoDB’s document model matches that reality directly.&lt;/p&gt;

&lt;p&gt;MongoDB is a strong fit because retrieval lives in the same system as the memory. &lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/" rel="noopener noreferrer"&gt;MongoDB Vector Search&lt;/a&gt; supports both approximate and exact nearest-neighbor search, and the default index type is &lt;a href="https://www.mongodb.com/docs/atlas/atlas-search/field-types/vector-type/" rel="noopener noreferrer"&gt;HNSW&lt;/a&gt;. &lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/" rel="noopener noreferrer"&gt;Vector search pre-filters&lt;/a&gt; let you scope recall by fields like &lt;code&gt;user_id&lt;/code&gt;, &lt;code&gt;tenant_id&lt;/code&gt;, or &lt;code&gt;memory_type&lt;/code&gt; before embeddings are compared. &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/hybrid-search/" rel="noopener noreferrer"&gt;Hybrid search&lt;/a&gt; combines vector and full-text retrieval with reciprocal rank fusion, which is exactly what memory needs when the data mixes natural language with exact identifiers like invoice IDs, feature flags, or product SKUs.&lt;/p&gt;
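
&lt;p&gt;A sketch of what a pre-filtered recall stage can look like (index and field names are illustrative, not from any particular deployment):&lt;/p&gt;

```python
# Scope memory recall to one user and one memory type before any embedding
# comparison happens, via the $vectorSearch "filter" clause.

def memory_recall_stage(query_vector, user_id):
    return {
        "$vectorSearch": {
            "index": "memory_index",     # hypothetical index name
            "path": "embedding",         # hypothetical field name
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
            "filter": {"user_id": user_id, "memory_type": "semantic"},
        }
    }

recall = memory_recall_stage([0.0] * 8, "user_123")
print(recall["$vectorSearch"]["filter"]["user_id"])  # user_123
```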

&lt;p&gt;Memory also needs retention and update policy. &lt;a href="https://www.mongodb.com/docs/manual/core/index-ttl/" rel="noopener noreferrer"&gt;TTL indexes&lt;/a&gt; automatically expire session artifacts, scratchpads, or short-lived summaries. &lt;a href="https://www.mongodb.com/docs/manual/changestreams/" rel="noopener noreferrer"&gt;Change streams&lt;/a&gt; give you a real-time feed of inserts and updates, which is the right trigger for summarization, entity extraction, or memory distillation jobs. When the data is relationship-heavy instead of chunk-heavy, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/graph-rag/" rel="noopener noreferrer"&gt;GraphRAG on MongoDB&lt;/a&gt; uses entities, edges, and &lt;code&gt;$graphLookup&lt;/code&gt; for relationship-aware, multi-hop retrieval.&lt;/p&gt;
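
&lt;p&gt;The change-stream trigger can be this small (the distillation job itself is a placeholder):&lt;/p&gt;

```python
# Watch memory inserts and hand each new document to a summarization or
# distillation job. With a real client:
#   with db.memories.watch(pipeline) as stream:
#       for change in stream:
#           distill(change["fullDocument"])   # hypothetical job

pipeline = [{"$match": {"operationType": "insert"}}]

print(pipeline[0]["$match"]["operationType"])  # insert
```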

&lt;p&gt;This is not a narrow LangChain story. MongoDB publishes integrations for &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langgraph/" rel="noopener noreferrer"&gt;LangGraph&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/" rel="noopener noreferrer"&gt;LangChain&lt;/a&gt;, &lt;a href="https://developers.llamaindex.ai/python/framework/integrations/vector_stores/mongodbatlasvectorsearch/" rel="noopener noreferrer"&gt;LlamaIndex&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/" rel="noopener noreferrer"&gt;Semantic Kernel&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/" rel="noopener noreferrer"&gt;Haystack&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/" rel="noopener noreferrer"&gt;Spring AI&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/" rel="noopener noreferrer"&gt;CrewAI&lt;/a&gt;, and &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/" rel="noopener noreferrer"&gt;Vertex AI&lt;/a&gt;. &lt;a href="https://developers.llamaindex.ai/python/framework/module_guides/storing/docstores/" rel="noopener noreferrer"&gt;LlamaIndex&lt;/a&gt; can use MongoDB for the vector store, document store, and index store. &lt;a href="https://docs.mem0.ai/components/vectordbs/dbs/mongodb" rel="noopener noreferrer"&gt;Mem0&lt;/a&gt; also supports MongoDB as a memory backend. MongoDB fits the storage contract these frameworks keep converging on: structured documents plus semantic, lexical, and graph-based retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  LangGraph: short-term checkpoints and long-term memory in one database
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langgraph/" rel="noopener noreferrer"&gt;MongoDB’s LangGraph integration&lt;/a&gt; exposes &lt;code&gt;MongoDBSaver&lt;/code&gt; for checkpoints and &lt;code&gt;MongoDBStore&lt;/code&gt; for durable memory, with optional vector indexing and TTL-based expiry. That maps directly to LangGraph’s own split between thread persistence and store-backed recall.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymongo&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langgraph.checkpoint.mongodb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBSaver&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langgraph.store.mongodb&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBStore&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_vector_index_config&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIEmbeddings&lt;/span&gt;

&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;connection-string&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Short-term memory: thread checkpoints
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;checkpointer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBSaver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Long-term memory: semantic store with metadata filters
&lt;/span&gt;&lt;span class="n"&gt;index_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_vector_index_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;embed&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text-embedding-3-small&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;dims&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1536&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;fields&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;filters&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Assume `builder` is an existing LangGraph StateGraph
&lt;/span&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;MongoDBStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_conn_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;conn_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;db_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;collection_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;index_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;index_config&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ttl_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default_ttl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;24&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;   &lt;span class="c1"&gt;# 30 days
&lt;/span&gt;        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;refresh_on_read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;compile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;checkpointer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;checkpointer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pref:vegan:soho&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User prefers vegan restaurants near SoHo.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;semantic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Where should I book dinner tonight?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the clean MongoDB story: checkpoints for working memory, a store for long-term memory, vector retrieval for recall, metadata filters for isolation, and TTL for automatic cleanup. The same database handles all of it.&lt;/p&gt;
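&lt;p&gt;One detail worth making concrete is how &lt;code&gt;refresh_on_read&lt;/code&gt; changes the TTL arithmetic: expiry is measured from the last time a memory was touched, not from when it was created. A purely illustrative sketch of that behavior (not the store's internals):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Matches the ttl_config above: 30 days, expressed in seconds.
DEFAULT_TTL_SECONDS = 60 * 60 * 24 * 30


def expires_at(last_touched: datetime, ttl_seconds: int = DEFAULT_TTL_SECONDS) -> datetime:
    """A memory expires ttl_seconds after it was last written -- or, with
    refresh_on_read=True, after it was last read."""
    return last_touched + timedelta(seconds=ttl_seconds)


created = datetime(2026, 1, 1, tzinfo=timezone.utc)

# Never read again: the memory lapses 30 days after creation.
deadline_without_read = expires_at(created)

# Read on Jan 20: refresh_on_read pushes the deadline out another 30 days.
deadline_with_read = expires_at(datetime(2026, 1, 20, tzinfo=timezone.utc))
```

The practical upshot is that memories a user keeps triggering stay alive indefinitely, while stale ones age out on their own.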

&lt;h2&gt;
  
  
  LangChain: chat history plus hybrid recall
&lt;/h2&gt;

&lt;p&gt;At the LangChain layer, MongoDB covers both conversation state and retrieval. &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/" rel="noopener noreferrer"&gt;&lt;code&gt;MongoDBChatMessageHistory&lt;/code&gt;&lt;/a&gt; persists per-session message history, &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/" rel="noopener noreferrer"&gt;&lt;code&gt;MongoDBAtlasVectorSearch&lt;/code&gt;&lt;/a&gt; stores semantic memories, and &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/hybrid-search/" rel="noopener noreferrer"&gt;&lt;code&gt;MongoDBAtlasHybridSearchRetriever&lt;/code&gt;&lt;/a&gt; fuses lexical and semantic recall.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_core.documents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Document&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_mongodb.chat_message_histories&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBChatMessageHistory&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_mongodb.retrievers.hybrid_search&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasHybridSearchRetriever&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_mongodb.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasVectorSearch&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain_openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIEmbeddings&lt;/span&gt;

&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;connection-string&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBChatMessageHistory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-42:thread-7&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;connection_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;database_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;collection_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;history_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;vector_store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;MongoDBAtlasVectorSearch&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_connection_string&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;connection_string&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_memory.user_memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text-embedding-3-small&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;index_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vector_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;User prefers vegan restaurants near SoHo.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;semantic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Invoice 8419 was disputed last month.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user-42&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;episodic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_user_message&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Find dinner options for me in SoHo.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoDBAtlasHybridSearchRetriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vector_store&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;search_index_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;search_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;top_k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;fulltext_penalty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;vector_penalty&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Find dinner options for a vegan user in SoHo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hybrid retrieval is not optional in a serious memory system. "Prefers vegan restaurants in SoHo" is semantic. "Invoice 8419" is lexical. MongoDB’s hybrid retriever exists because production memory contains both.&lt;/p&gt;
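&lt;p&gt;The fusion itself is effectively reciprocal rank fusion: each result list contributes &lt;code&gt;1 / (penalty + rank)&lt;/code&gt; per document, which is why the retriever takes &lt;code&gt;fulltext_penalty&lt;/code&gt; and &lt;code&gt;vector_penalty&lt;/code&gt; arguments. A minimal sketch of the scoring (illustrative only, not the library's implementation; the document ids are made up):&lt;/p&gt;

```python
def rrf_scores(lexical_ranked, vector_ranked, lexical_penalty=50, vector_penalty=50):
    """Fuse two ranked lists of doc ids via reciprocal rank fusion.

    Each list contributes 1 / (penalty + rank), with rank starting at 1;
    a document appearing in both lists sums both terms, so agreement
    between lexical and semantic recall is rewarded.
    """
    scores = {}
    for rank, doc_id in enumerate(lexical_ranked, start=1):
        scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (lexical_penalty + rank)
    for rank, doc_id in enumerate(vector_ranked, start=1):
        scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (vector_penalty + rank)
    return sorted(scores, key=scores.get, reverse=True)


# "vegan-soho" appears in both lists, so it outranks "invoice-8419",
# which only the lexical side surfaced.
fused = rrf_scores(
    lexical_ranked=["invoice-8419", "vegan-soho"],
    vector_ranked=["vegan-soho", "dinner-notes"],
)
```

Raising one penalty relative to the other down-weights that side of the search, which is the knob you reach for when one modality starts dominating results it shouldn't.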

&lt;h2&gt;
  
  
  Google ADK: the Sessions-and-Memory model maps cleanly to MongoDB
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://adk.dev/sessions/session/" rel="noopener noreferrer"&gt;Google ADK&lt;/a&gt; makes the architecture explicit. &lt;a href="https://adk.dev/runtime/event-loop/" rel="noopener noreferrer"&gt;&lt;code&gt;SessionService&lt;/code&gt;&lt;/a&gt; manages session objects, applies &lt;code&gt;state_delta&lt;/code&gt;, and appends event history. &lt;a href="https://adk.dev/runtime/event-loop/" rel="noopener noreferrer"&gt;&lt;code&gt;MemoryService&lt;/code&gt;&lt;/a&gt; manages long-term semantic memory across sessions. The &lt;a href="https://adk.dev/sessions/session/" rel="noopener noreferrer"&gt;Sessions docs&lt;/a&gt; currently list &lt;code&gt;InMemorySessionService&lt;/code&gt;, &lt;code&gt;VertexAiSessionService&lt;/code&gt;, and &lt;code&gt;DatabaseSessionService&lt;/code&gt;, so MongoDB is not a built-in backend today. But ADK exposes &lt;a href="https://adk.dev/api-reference/python/" rel="noopener noreferrer"&gt;base session and memory service abstractions&lt;/a&gt;, which makes MongoDB a natural implementation target rather than a workaround. &lt;a href="https://docs.cloud.google.com/agent-builder/agent-engine/memory-bank/overview" rel="noopener noreferrer"&gt;Memory Bank&lt;/a&gt; then adds the identity-scoped memory semantics on top.&lt;/p&gt;

&lt;p&gt;A MongoDB-backed ADK deployment should separate sessions, events, and distilled memories into dedicated collections. That mirrors ADK’s documented split between mutable session state, append-only event history, and searchable long-term memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timezone&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymongo&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;

&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;connection-string&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agent_memory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;sessions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;adk_sessions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;events&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;adk_events&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;memories&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;adk_memories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;)],&lt;/span&gt;
    &lt;span class="n"&gt;unique&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_index&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;memories&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_index&lt;/span&gt;&lt;span class="p"&gt;([(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ASCENDING&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt;
&lt;span class="c1"&gt;# Create a MongoDB Vector Search index on memories.embedding
# Mark user_id and memory_type as filter fields in the index definition.
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;persist_session_turn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;sessions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update_one&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$set&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;state&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;updated_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$setOnInsert&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;created_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;upsert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_one&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;app_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;app_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;session_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;event&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;store_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;source_session_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;memories&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_one&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;semantic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;embedding&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source_session_ids&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;source_session_id&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;created_at&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_memories&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query_embedding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;float&lt;/span&gt;&lt;span class="p"&gt;]):&lt;/span&gt;
    &lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$vectorSearch&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_vector_index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;embedding&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;queryVector&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;query_embedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;numCandidates&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filter&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;semantic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$project&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;memory_type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;$meta&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vectorSearchScore&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;memories&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;aggregate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a design sketch, not an official ADK adapter. The point is that ADK’s abstractions already describe a storage model MongoDB handles well: mutable session state, append-only events, and searchable long-term memory. Once a &lt;code&gt;MemoryService&lt;/code&gt; exists, ADK’s built-in &lt;a href="https://adk.dev/sessions/memory/" rel="noopener noreferrer"&gt;&lt;code&gt;PreloadMemory&lt;/code&gt; and &lt;code&gt;LoadMemory&lt;/code&gt; tools&lt;/a&gt; can use it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dedicated memory layers also fit: Mem0 on MongoDB
&lt;/h2&gt;

&lt;p&gt;MongoDB is not only useful when memory is native to the agent framework. &lt;a href="https://docs.mem0.ai/components/vectordbs/dbs/mongodb" rel="noopener noreferrer"&gt;Mem0’s MongoDB backend&lt;/a&gt; uses MongoDB directly as the vector database for memory storage and retrieval. That matters because it shows MongoDB works both as the application database and as the substrate beneath a dedicated memory layer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mem0&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Memory&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vector_store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;provider&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mongodb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;config&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;db_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mem0_db&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;collection_name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mem0_collection&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mongo_uri&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;connection-string&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;m planning to watch a movie tonight.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What genres do you like?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I love sci-fi, not thrillers.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alice&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;category&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;movies&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The architectural value is the same as in LangGraph and LangChain: persistent memory objects, vector retrieval, and application data can live in one operational system instead of being spread across separate services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why MongoDB is the best choice here
&lt;/h2&gt;

&lt;p&gt;MongoDB is the best choice when you want one system to hold agent state, long-term memory, retrieval data, and the application records the agent reasons over. The document model matches how current frameworks persist memory. &lt;a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/" rel="noopener noreferrer"&gt;Vector Search&lt;/a&gt; and &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/hybrid-search/" rel="noopener noreferrer"&gt;Search&lt;/a&gt; cover recall. &lt;a href="https://www.mongodb.com/docs/manual/core/index-ttl/" rel="noopener noreferrer"&gt;TTL indexes&lt;/a&gt; and &lt;a href="https://www.mongodb.com/docs/manual/changestreams/" rel="noopener noreferrer"&gt;change streams&lt;/a&gt; cover retention and event-driven memory extraction. &lt;a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/graph-rag/" rel="noopener noreferrer"&gt;GraphRAG&lt;/a&gt; covers relationship-heavy data. The result is not "a vector store with extra features." It is a memory layer that can also be the system of record. That is why MongoDB works as the brain of a modern AI application.&lt;/p&gt;
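As a rough sketch of the retention and event-driven extraction pieces mentioned above (the collection and field names here are assumptions for illustration, not from this post): a TTL index expires raw session events automatically, and a change-stream pipeline narrows extraction work to newly inserted documents.

```python
# Hypothetical sketch; "session_events" and "created_at" are assumed names.

# Retention: a TTL index expires documents a fixed time after created_at.
TTL_SECONDS = 30 * 24 * 60 * 60  # 30 days

# Event-driven extraction: a change-stream pipeline matching only inserts.
extraction_pipeline = [{"$match": {"operationType": "insert"}}]

# Against a live deployment these would be wired up roughly as:
#   events = db["session_events"]
#   events.create_index("created_at", expireAfterSeconds=TTL_SECONDS)
#   for change in events.watch(extraction_pipeline):
#       ...  # summarize the new event into long-term memory
```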

</description>
      <category>mongodb</category>
      <category>ai</category>
      <category>development</category>
      <category>agents</category>
    </item>
    <item>
      <title>Cloudflare + MongoDB: How to fix 'Error: Dynamic require of "punycode/" is not supported'</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Wed, 31 Dec 2025 13:50:50 +0000</pubDate>
      <link>https://dev.to/alexbevi/cloudflare-mongodb-how-to-fix-error-dynamic-require-of-punycode-is-not-supported-1hmh</link>
      <guid>https://dev.to/alexbevi/cloudflare-mongodb-how-to-fix-error-dynamic-require-of-punycode-is-not-supported-1hmh</guid>
      <description>&lt;p&gt;If you've followed my &lt;a href="https://alexbevi.com/blog/2025/03/25/cloudflare-workers-and-mongodb/" rel="noopener noreferrer"&gt;previous post&lt;/a&gt; to try and connect to MongoDB from Cloudflare workers, it's possible you've come across the following issue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Dynamic require of "punycode/" is not supported
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The TL;DR is that there's an issue with how &lt;code&gt;@cloudflare/vite-plugin&lt;/code&gt; is &lt;a href="https://github.com/jsdom/tr46/pull/73" rel="noopener noreferrer"&gt;processing an import with a trailing slash within the &lt;code&gt;tr46&lt;/code&gt; library&lt;/a&gt;, which is a transitive dependency of the MongoDB Node.js driver. The current workaround is to patch this out until a proper fix is in place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reproduction
&lt;/h3&gt;

&lt;p&gt;Let's begin with a new application we can use as a minimal reproduction. Chances are you've already got an application that's hitting this issue, but if not, we can verify this behavior by &lt;a href="https://developers.cloudflare.com/workers/framework-guides/web-apps/react-router/" rel="noopener noreferrer"&gt;creating a new React Router app using &lt;code&gt;create-cloudflare&lt;/code&gt;&lt;/a&gt; as follows, then adding the MongoDB Node.js driver as a dependency and importing it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create a new react router app&lt;/span&gt;
npm create cloudflare@latest &lt;span class="nt"&gt;--&lt;/span&gt; my-react-router-app &lt;span class="nt"&gt;--framework&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;react-router
&lt;span class="nb"&gt;cd &lt;/span&gt;my-react-router-app
&lt;span class="c"&gt;# install mongodb&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;mongodb &lt;span class="nt"&gt;--save&lt;/span&gt;
&lt;span class="c"&gt;# prepend an import to the workers/app.ts file&lt;/span&gt;
&lt;span class="nb"&gt;printf&lt;/span&gt; &lt;span class="s1"&gt;'import { MongoClient } from "mongodb";\n%s'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;workers/app.ts&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; workers/app.ts
&lt;span class="c"&gt;# update wrangler.jsonc with compatibility flags to support SSR&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'/"compatibility_date": "2025-04-04"/a\
  "compatibility_flags": ["nodejs_compat"],'&lt;/span&gt; wrangler.jsonc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With a freshly bootstrapped application, let's try running it to see what happens.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run dev
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; dev
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; react-router dev

11:15:07 AM &lt;span class="o"&gt;[&lt;/span&gt;vite] &lt;span class="o"&gt;(&lt;/span&gt;ssr&lt;span class="o"&gt;)&lt;/span&gt; Re-optimizing dependencies because vite config has changed
11:15:08 AM &lt;span class="o"&gt;[&lt;/span&gt;vite] &lt;span class="o"&gt;(&lt;/span&gt;ssr&lt;span class="o"&gt;)&lt;/span&gt; ✨ new dependencies optimized: mongodb
11:15:08 AM &lt;span class="o"&gt;[&lt;/span&gt;vite] &lt;span class="o"&gt;(&lt;/span&gt;ssr&lt;span class="o"&gt;)&lt;/span&gt; ✨ optimized dependencies changed. reloading
&lt;span class="o"&gt;[&lt;/span&gt;vite] program reload
Error: Dynamic require of &lt;span class="s2"&gt;"punycode/"&lt;/span&gt; is not supported
    at null.&amp;lt;anonymous&amp;gt; &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/.vite/deps_ssr/chunk-PLDDJCW6.js:11:9&lt;span class="o"&gt;)&lt;/span&gt;
    at node_modules/tr46/index.js &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/tr46/index.js:3:18&lt;span class="o"&gt;)&lt;/span&gt;
    at __require2 &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/.vite/deps_ssr/chunk-PLDDJCW6.js:17:50&lt;span class="o"&gt;)&lt;/span&gt;
    at node_modules/whatwg-url/lib/url-state-machine.js &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/whatwg-url/lib/url-state-machine.js:2:14&lt;span class="o"&gt;)&lt;/span&gt;
    at __require2 &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/.vite/deps_ssr/chunk-PLDDJCW6.js:17:50&lt;span class="o"&gt;)&lt;/span&gt;
    at node_modules/whatwg-url/lib/URL-impl.js &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/whatwg-url/lib/URL-impl.js:2:13&lt;span class="o"&gt;)&lt;/span&gt;
    at __require2 &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/.vite/deps_ssr/chunk-PLDDJCW6.js:17:50&lt;span class="o"&gt;)&lt;/span&gt;
    at node_modules/whatwg-url/lib/URL.js &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/whatwg-url/lib/URL.js:499:14&lt;span class="o"&gt;)&lt;/span&gt;
    at __require2 &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/.vite/deps_ssr/chunk-PLDDJCW6.js:17:50&lt;span class="o"&gt;)&lt;/span&gt;
    at node_modules/whatwg-url/webidl2js-wrapper.js &lt;span class="o"&gt;(&lt;/span&gt;/Users/alex/Temp/my-react-router-app/node_modules/whatwg-url/webidl2js-wrapper.js:3:13&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;cause]: undefined
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Vite is complaining that the dynamic require of "punycode/" is not supported. The trailing slash following "punycode" is interesting, but we should first see where it's being imported. We can do this by using &lt;a href="https://docs.npmjs.com/cli/v7/commands/npm-ls" rel="noopener noreferrer"&gt;&lt;code&gt;npm ls&lt;/code&gt;&lt;/a&gt; to quickly narrow down usage of &lt;code&gt;punycode&lt;/code&gt; to the &lt;code&gt;tr46&lt;/code&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;ls &lt;/span&gt;punycode
my-react-router-app@ /Users/alex/Temp/my-react-router-app
└─┬ mongodb@7.0.0
  └─┬ mongodb-connection-string-url@7.0.0
    └─┬ whatwg-url@14.2.0
      └─┬ tr46@5.1.1
        └── punycode@2.3.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inspecting the &lt;code&gt;tr46&lt;/code&gt; library at &lt;a href="https://github.com/jsdom/tr46/blob/main/index.js" rel="noopener noreferrer"&gt;https://github.com/jsdom/tr46/blob/main/index.js&lt;/a&gt; shows the trailing slash on the import as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;use strict&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;punycode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;punycode/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// &amp;lt;--- this is the line in question&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;regexes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./lib/regexes.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mappingTable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./lib/mappingTable.json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;STATUS_MAPPING&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./lib/statusMapping.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I initially tried to open a PR at &lt;a href="https://github.com/jsdom/tr46/pull/73" rel="noopener noreferrer"&gt;https://github.com/jsdom/tr46/pull/73&lt;/a&gt; to sort this out, but the maintainer pointed out that the issue is with Vite, so we'll need to look elsewhere for a solution. This change (introduced in commit &lt;a href="https://github.com/jsdom/tr46/commit/fef6e95243caaa0e46a1aa42fa21af6caef11e51" rel="noopener noreferrer"&gt;&lt;code&gt;fef6e95&lt;/code&gt;&lt;/a&gt;) was likely made to address &lt;code&gt;punycode&lt;/code&gt; deprecation warnings such as the one described in &lt;a href="https://github.com/jsdom/tr46/issues/63" rel="noopener noreferrer"&gt;https://github.com/jsdom/tr46/issues/63&lt;/a&gt;. For more info on those deprecations see &lt;a href="https://medium.com/@asimabas96/solving-the-punycode-module-is-deprecated-issue-in-node-js-93437637948a" rel="noopener noreferrer"&gt;"Solving the 'Punycode Module is Deprecated' Issue in Node.js"&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Patching
&lt;/h3&gt;

&lt;p&gt;We're going to solve this issue in a roundabout fashion using &lt;a href="https://www.npmjs.com/package/patch-package" rel="noopener noreferrer"&gt;&lt;code&gt;patch-package&lt;/code&gt;&lt;/a&gt; to modify the &lt;code&gt;punycode&lt;/code&gt; import directly in our &lt;code&gt;node_modules&lt;/code&gt;, then add a &lt;code&gt;postinstall&lt;/code&gt; script that ensures the patch is consistently applied.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install patch-package&lt;/span&gt;
npm &lt;span class="nb"&gt;install &lt;/span&gt;patch-package
&lt;span class="c"&gt;# update package.json to run patch-package as well as cf-typegen (which is there by default)&lt;/span&gt;
npm pkg &lt;span class="nb"&gt;set &lt;/span&gt;scripts.postinstall&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"patch-package &amp;amp;&amp;amp; npm run cf-typegen"&lt;/span&gt;
&lt;span class="c"&gt;# update node_modules/tr46/index.js to remove the trailing slash from the import&lt;/span&gt;
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'s/require("punycode\/")/require("punycode")/g'&lt;/span&gt; node_modules/tr46/index.js
&lt;span class="c"&gt;# create a patch for the tr46 package based on the above change&lt;/span&gt;
npx patch-package tr46
&lt;span class="c"&gt;# reinstall and apply patches&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should do it! When we run &lt;code&gt;npm install&lt;/code&gt;, it will also run the &lt;code&gt;postinstall&lt;/code&gt; script, which applies the patch we just created.&lt;/p&gt;
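For reference, after the `npm pkg set` step above, the `scripts` section of `package.json` should contain an entry roughly like this (other scripts omitted):

```json
{
  "scripts": {
    "postinstall": "patch-package &amp;&amp; npm run cf-typegen"
  }
}
```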

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Though patching transitive dependencies to work around an issue like this is not ideal, it does offer a path forward for anyone hitting this specific error. To summarize what we did to address the issue:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the &lt;code&gt;patch-package&lt;/code&gt; library (&lt;code&gt;npm install patch-package&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Update your &lt;code&gt;package.json&lt;/code&gt;'s &lt;code&gt;scripts.postinstall&lt;/code&gt; to prepend a &lt;code&gt;patch-package&lt;/code&gt; script to any &lt;code&gt;postinstall&lt;/code&gt; scripts that may already be present&lt;/li&gt;
&lt;li&gt;Modify &lt;code&gt;node_modules/tr46/index.js&lt;/code&gt; to remove the trailing &lt;code&gt;/&lt;/code&gt; from &lt;code&gt;require("punycode/")&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create the patch by running &lt;code&gt;npx patch-package tr46&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Ensure the patch is applied by running &lt;code&gt;npm install&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Hopefully we can get this sorted out more cleanly (reported as &lt;a href="https://github.com/cloudflare/workers-sdk/issues/11751" rel="noopener noreferrer"&gt;https://github.com/cloudflare/workers-sdk/issues/11751&lt;/a&gt;), but in the meantime feel free to use this approach if you find it suitable.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>cloudflare</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>MongoDB Drivers and Network Compression</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Thu, 13 Nov 2025 16:19:50 +0000</pubDate>
      <link>https://dev.to/alexbevi/mongodb-drivers-and-network-compression-4b7</link>
      <guid>https://dev.to/alexbevi/mongodb-drivers-and-network-compression-4b7</guid>
      <description>&lt;p&gt;MongoDB's drivers communicate with a MongoDB process using the &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol" rel="noopener noreferrer"&gt;Wire Protocol&lt;/a&gt;, which is a simple socket-based, request-response style protocol that primarily uses the &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/#op_msg" rel="noopener noreferrer"&gt;&lt;code&gt;OP_MSG&lt;/code&gt;&lt;/a&gt; opcode (though &lt;a href="https://www.mongodb.com/docs/v6.0/release-notes/6.0-compatibility/#std-label-legacy-op-codes-removed" rel="noopener noreferrer"&gt;prior to MongoDB 6.0&lt;/a&gt; there were a number of additional &lt;a href="https://www.mongodb.com/docs/manual/legacy-opcodes" rel="noopener noreferrer"&gt;legacy opcodes&lt;/a&gt;). Since the contents of &lt;code&gt;OP_MSG&lt;/code&gt; messages was uncompressed, starting with MongoDB 3.4 a new opcode was introduced that would enable the Wire Protocol to support compressed messages as well: &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/#op_compressed" rel="noopener noreferrer"&gt;&lt;code&gt;OP_COMPRESSED&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All official &lt;a href="https://www.mongodb.com/docs/drivers/" rel="noopener noreferrer"&gt;MongoDB drivers&lt;/a&gt; allow you to enable and configure &lt;a href="https://www.mongodb.com/docs/manual/reference/connection-string-options/#compression-options" rel="noopener noreferrer"&gt;compression options&lt;/a&gt; via the connection string. To use any of the available compressors, they first need to be &lt;a href="https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-net.compression.compressors" rel="noopener noreferrer"&gt;enabled for each &lt;code&gt;mongod&lt;/code&gt; or &lt;code&gt;mongos&lt;/code&gt; instance through the &lt;code&gt;net.compression.compressors&lt;/code&gt; option&lt;/a&gt;. However, all compressors are currently enabled by default, so you'd really only need to modify this option to remove support for a given compressor.&lt;/p&gt;
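Restricting which compressors the server advertises is done through that same option in the `mongod`/`mongos` YAML configuration file. For example, to offer only snappy and zstd (dropping zlib):

```yaml
# mongod.conf / mongos.conf
net:
  compression:
    compressors: snappy,zstd
```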

&lt;p&gt;Enabling network compression for your workload is as easy as appending &lt;code&gt;compressors=xxx&lt;/code&gt; (where &lt;code&gt;xxx&lt;/code&gt; is one or more compressors as comma-separated values) to your connection string. For every &lt;code&gt;MongoClient&lt;/code&gt; created with this connection string, &lt;em&gt;almost all&lt;/em&gt;&lt;sup id="fnref1"&gt;1&lt;/sup&gt; database commands will be compressed, which can result in massive reductions in the amount of data that needs to be sent back and forth to a MongoDB process.&lt;/p&gt;
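For example (the host below is a placeholder), a connection string requesting zstd first, with snappy and zlib as fallbacks, looks like this; the driver negotiates the first compressor in the list that the server also supports.

```python
# Placeholder host; compressors are listed in order of preference.
uri = "mongodb://db.example.com:27017/?compressors=zstd,snappy,zlib"

# With PyMongo, for instance, the same setting can also be passed as a
# keyword argument instead of in the URI:
#   MongoClient("mongodb://db.example.com:27017", compressors="zstd,snappy,zlib")
```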

&lt;p&gt;As a simple demonstration, I've instrumented a Node.js-based workload (see &lt;a href="https://github.com/alexbevi/node-tcp-metrics" rel="noopener noreferrer"&gt;alexbevi/node-tcp-metrics&lt;/a&gt;) to hook into &lt;a href="https://nodejs.org/api/net.html#netcreateconnection" rel="noopener noreferrer"&gt;&lt;code&gt;net.createConnection()&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://nodejs.org/api/net.html#class-netserver" rel="noopener noreferrer"&gt;&lt;code&gt;net.Server&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://nodejs.org/api/tls.html#tlsconnectoptions-callback" rel="noopener noreferrer"&gt;&lt;code&gt;tls.connect()&lt;/code&gt;&lt;/a&gt; to track the number of bytes being sent/received:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./tcp-metrics.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;on&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./tcp-metrics.js&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MongoClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongodb&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Chance&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;chance&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;MONGODB_URI&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;MONGODB_URI environment variable is not set&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;delay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;number&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;testdb&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;testcollection&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Chance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;42&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Fixed seed for deterministic generation&lt;/span&gt;

    &lt;span class="c1"&gt;// Generate a complex document structure that's approximately 5MB&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;itemCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Adjust to control size&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;itemCount&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;name&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;email&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;street&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;address&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;city&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;state&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;zip&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;country&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;country&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;phone&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="na"&gt;company&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;company&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="na"&gt;bio&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;paragraph&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;sentences&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="na"&gt;avatar&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;url&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;word&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
      &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;createdAt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toISOString&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;lastLogin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timestamp&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;preferences&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;theme&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pickone&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dark&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;light&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;auto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
        &lt;span class="na"&gt;language&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;locale&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="na"&gt;notifications&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;chance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}))&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="na"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;large-doc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insertOne&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;large-doc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Document insert and read complete, doc size (bytes):&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;deleteOne&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;large-doc&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;MongoDB error:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;

&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;socketSummary&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When this is run, the workload will create a complex JSON document using &lt;a href="https://chancejs.com/" rel="noopener noreferrer"&gt;Chance&lt;/a&gt;, write it to a &lt;a href="https://www.mongodb.com/products/platform/atlas-database" rel="noopener noreferrer"&gt;MongoDB Atlas Database&lt;/a&gt;, then read it back before deleting it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MONGODB_URI&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mongodb+srv://USER:PASS@abc.cdefg.mongodb.net/"&lt;/span&gt; npm run dev

Document insert and &lt;span class="nb"&gt;read complete&lt;/span&gt;, doc size &lt;span class="o"&gt;(&lt;/span&gt;bytes&lt;span class="o"&gt;)&lt;/span&gt;: 4725754
&lt;span class="o"&gt;{&lt;/span&gt; rx: 5058704, tx: 5058304, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1093, tx: 523, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.101:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1093, tx: 523, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.116:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1117, tx: 523, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;There are 4 open sockets in the example as the default Atlas configuration is a 3-member &lt;a href="https://www.mongodb.com/docs/manual/replication/" rel="noopener noreferrer"&gt;replica set&lt;/a&gt;. The driver has opened one socket to send commands to the server, and has also created dedicated monitoring connections to each host. If the workload were to remain active and not exit immediately, another 3 RTT (round-trip time) connections would also be opened (one to each host in the replica set) for a total of 7 sockets.&lt;br&gt;
See the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-monitoring.md" rel="noopener noreferrer"&gt;Server Monitoring specification&lt;/a&gt;, or &lt;a href="https://alexbevi.com/blog/2023/07/04/how-many-connections-is-my-application-establishing-to-my-mongodb-cluster/" rel="noopener noreferrer"&gt;"How Many Connections is My Application Establishing to My MongoDB Cluster?"&lt;/a&gt; for more detail.&lt;/p&gt;
&lt;/blockquote&gt;
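&lt;p&gt;The socket arithmetic above can be sketched as a simple model (the function and its parameters are mine, a simplification of what the driver actually does):&lt;/p&gt;

```javascript
// Simplified model of driver socket usage against a replica set:
// one pooled operational connection (for this single-threaded workload),
// plus one monitoring connection per host, plus -- once the client stays
// active long enough for streaming heartbeats -- one RTT connection per host.
function expectedSockets(hosts, { operational = 1, withRtt = false } = {}) {
  const monitoring = hosts;        // one per replica set member
  const rtt = withRtt ? hosts : 0; // round-trip-time measurement connections
  return operational + monitoring + rtt;
}

console.log(expectedSockets(3));                    // 4, as in the output above
console.log(expectedSockets(3, { withRtt: true })); // 7
```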

&lt;p&gt;MongoDB supports 3 network compressors: &lt;a href="https://zlib.net/" rel="noopener noreferrer"&gt;zlib&lt;/a&gt;, ZStandard (&lt;a href="https://facebook.github.io/zstd/" rel="noopener noreferrer"&gt;zstd&lt;/a&gt;) and &lt;a href="https://google.github.io/snappy/" rel="noopener noreferrer"&gt;Snappy&lt;/a&gt;. zlib is always supported out of the box; however, some drivers require an additional package to support the other compressors. For example, when using MongoDB's &lt;a href="https://www.mongodb.com/docs/drivers/node/current/" rel="noopener noreferrer"&gt;Node.js driver&lt;/a&gt;, the following packages would be required from &lt;code&gt;npm&lt;/code&gt; to support snappy and zstd:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/snappy" rel="noopener noreferrer"&gt;&lt;code&gt;snappy&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/@mongodb-js/zstd" rel="noopener noreferrer"&gt;&lt;code&gt;@mongodb-js/zstd&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
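&lt;p&gt;Note that the compressor list in the connection string is a preference order: during the handshake the server replies with the subset it supports, and the first match wins. A rough sketch of that negotiation (not the driver's actual implementation):&lt;/p&gt;

```javascript
// Rough sketch of compressor negotiation: the client proposes an ordered
// list, the server replies with the subset it supports, and the first
// client-preferred compressor the server also supports is used.
function negotiateCompressor(clientPreference, serverSupported) {
  const supported = new Set(serverSupported);
  // null means no mutually supported compressor: messages stay uncompressed.
  return clientPreference.find((c) => supported.has(c)) ?? null;
}

console.log(negotiateCompressor(["zstd", "snappy", "zlib"], ["snappy", "zlib", "zstd"])); // zstd
console.log(negotiateCompressor(["snappy"], ["zlib"])); // null
```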

&lt;p&gt;Our workload is using the default &lt;a href="https://www.mongodb.com/docs/manual/core/read-preference/" rel="noopener noreferrer"&gt;read preference&lt;/a&gt;, so with a baseline of &lt;code&gt;rx: 5058704, tx: 5058304&lt;/code&gt; received from and sent to the &lt;a href="https://www.mongodb.com/docs/manual/core/replica-set-primary/" rel="noopener noreferrer"&gt;replica set primary&lt;/a&gt;, let's explore the impact of network compression.&lt;/p&gt;

&lt;h3&gt;
  
  
  zlib
&lt;/h3&gt;

&lt;p&gt;zlib is a software library used for data compression as well as a data format. zlib was written by Jean-loup Gailly and Mark Adler and implements the DEFLATE compression algorithm used in their gzip file compression program. The first public version of zlib, 0.9, was released on 1 May 1995 and was originally intended for use with the libpng image library. It is free software, distributed under the zlib License.&lt;/p&gt;

&lt;p&gt;To test this compressor we append &lt;code&gt;compressors=zlib&lt;/code&gt; to our connection string and re-run our script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MONGODB_URI&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mongodb+srv://USER:PASS@abc.cdefg.mongodb.net/?compressors=zlib"&lt;/span&gt; npm run dev

Document insert and &lt;span class="nb"&gt;read complete&lt;/span&gt;, doc size &lt;span class="o"&gt;(&lt;/span&gt;bytes&lt;span class="o"&gt;)&lt;/span&gt;: 4725754
&lt;span class="o"&gt;{&lt;/span&gt; rx: 2417301, tx: 2361623, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1147, tx: 515, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1123, tx: 518, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.101:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1123, tx: 518, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.116:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;zlib&lt;/code&gt; compression enabled we can see about a &lt;strong&gt;52% decrease&lt;/strong&gt; in the amount of data sent over the wire for this workload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uncompressed:      rx: 5058704, tx: 5058304
compressed (zlib): rx: 2417301, tx: 2361623
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ZStandard (zstd)
&lt;/h3&gt;

&lt;p&gt;Zstandard is a lossless data compression algorithm developed by Yann Collet at Facebook. Zstd is the corresponding reference implementation in C, released as open-source software on 31 August 2016. The algorithm was published in 2018 as RFC 8478, which also defines an associated media type "application/zstd", filename extension "zst", and HTTP content encoding "zstd".&lt;/p&gt;

&lt;p&gt;To test this compressor we append &lt;code&gt;compressors=zstd&lt;/code&gt; to our connection string and re-run our script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MONGODB_URI&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mongodb+srv://USER:PASS@abc.cdefg.mongodb.net/?compressors=zstd"&lt;/span&gt; npm run dev

Document insert and &lt;span class="nb"&gt;read complete&lt;/span&gt;, doc size &lt;span class="o"&gt;(&lt;/span&gt;bytes&lt;span class="o"&gt;)&lt;/span&gt;: 4725754
&lt;span class="o"&gt;{&lt;/span&gt; rx: 2395239, tx: 2394798, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1147, tx: 519, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1123, tx: 519, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.101:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1123, tx: 519, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.116:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;zstd&lt;/code&gt; compression enabled we can see about a &lt;strong&gt;53% decrease&lt;/strong&gt; in the amount of data sent over the wire for this workload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uncompressed:      rx: 5058704, tx: 5058304
compressed (zstd): rx: 2395239, tx: 2394798
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Snappy
&lt;/h3&gt;

&lt;p&gt;Snappy (previously known as Zippy) is a fast data compression and decompression library written in C++ by Google, based on ideas from LZ77 and open-sourced in 2011. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. Compression speed is 250 MB/s and decompression speed is 500 MB/s using a single core of a circa 2011 "Westmere" 2.26 GHz Core i7 processor running in 64-bit mode. Its compression ratio is 20–100% lower than gzip's.&lt;/p&gt;

&lt;p&gt;To test this compressor we append &lt;code&gt;compressors=snappy&lt;/code&gt; to our connection string and re-run our script.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ MONGODB_URI&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mongodb+srv://USER:PASS@abc.cdefg.mongodb.net/?compressors=snappy"&lt;/span&gt; npm run dev

Document insert and &lt;span class="nb"&gt;read complete&lt;/span&gt;, doc size &lt;span class="o"&gt;(&lt;/span&gt;bytes&lt;span class="o"&gt;)&lt;/span&gt;: 4725754
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1125, tx: 527, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.116:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 3807095, tx: 3797837, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1149, tx: 527, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.108:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt; rx: 1125, tx: 527, label: &lt;span class="s1"&gt;'xxx.yyy.zzz.101:27017'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;snappy&lt;/code&gt; compression enabled we can see about a &lt;strong&gt;25% decrease&lt;/strong&gt; in the amount of data sent over the wire for this workload:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uncompressed:        rx: 5058704, tx: 5058304
compressed (snappy): rx: 3807095, tx: 3797837
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
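&lt;p&gt;Pulling the three runs together, the percentage saved for each compressor follows directly from the rx/tx byte counts measured against the primary (numbers taken from the runs above):&lt;/p&gt;

```javascript
// Byte counts measured against the replica set primary in the runs above.
const baseline = { rx: 5058704, tx: 5058304 };
const runs = {
  zlib:   { rx: 2417301, tx: 2361623 },
  zstd:   { rx: 2395239, tx: 2394798 },
  snappy: { rx: 3807095, tx: 3797837 },
};

// Percentage of bytes saved relative to the uncompressed baseline.
const pctSaved = (before, after) => (100 * (before - after)) / before;

for (const [name, { rx, tx }] of Object.entries(runs)) {
  console.log(
    `${name}: rx -${pctSaved(baseline.rx, rx).toFixed(1)}%, ` +
    `tx -${pctSaved(baseline.tx, tx).toFixed(1)}%`
  );
}
```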



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Though it may require an additional dependency, Zstandard is likely the best option, as it provides good compression with a low memory footprint. For Node.js specifically, this requirement will likely go away once the driver's minimum runtime version rises to Node 24, as &lt;a href="https://nodejs.org/en/blog/release/v23.8.0#support-for-the-zstd-compression-algorithm" rel="noopener noreferrer"&gt;zstd support was added in 23.8.0&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you're running your workload in AWS (or anywhere, really), &lt;a href="https://aws.amazon.com/blogs/architecture/overview-of-data-transfer-costs-for-common-architectures/" rel="noopener noreferrer"&gt;data transfer costs&lt;/a&gt; will contribute to your overall costs. You can use tools like the &lt;a href="https://calculator.aws" rel="noopener noreferrer"&gt;AWS pricing calculator&lt;/a&gt; to dig into cost projections, but given that a simple connection string update can potentially cut that transfer in half (at least for the data going to and from your cluster), network compression makes MongoDB a more cost-effective option for your application.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;small&gt;Per the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/compression/OP_COMPRESSED.md#messages-not-allowed-to-be-compressed" rel="noopener noreferrer"&gt;Wire Compression specification&lt;/a&gt;, some commands should not be compressed. These include &lt;code&gt;hello&lt;/code&gt;/&lt;code&gt;ismaster&lt;/code&gt;, &lt;code&gt;saslStart&lt;/code&gt;, &lt;code&gt;saslContinue&lt;/code&gt;, &lt;code&gt;getnonce&lt;/code&gt;, &lt;code&gt;authenticate&lt;/code&gt;, &lt;code&gt;createUser&lt;/code&gt;, &lt;code&gt;updateUser&lt;/code&gt;, &lt;code&gt;copydbSaslStart&lt;/code&gt;, &lt;code&gt;copydbgetnonce&lt;/code&gt; and &lt;code&gt;copydb&lt;/code&gt;.&lt;/small&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>mongodb</category>
      <category>networking</category>
      <category>compression</category>
      <category>javascript</category>
    </item>
    <item>
      <title>MongoDB Driver Compatibility with MongoDB Servers</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Tue, 05 Aug 2025 09:52:39 +0000</pubDate>
      <link>https://dev.to/alexbevi/mongodb-driver-compatibility-with-mongodb-servers-4m11</link>
      <guid>https://dev.to/alexbevi/mongodb-driver-compatibility-with-mongodb-servers-4m11</guid>
<description>&lt;p&gt;MongoDB server versions &lt;a href="https://www.mongodb.com/legal/support-policy/lifecycles" rel="noopener noreferrer"&gt;eventually reach EOL&lt;/a&gt; - as MongoDB 6.0 did on July 31, 2025. If your workload is running in MongoDB Atlas, the major version of your cluster will be automatically upgraded, but what if you haven't upgraded your application, its dependencies or the runtime environment? Will your application break? Is it still compatible? I wrote about this previously in &lt;a href="https://alexbevi.com/blog/2023/01/13/will-upgrading-my-mongodb-server-version-break-my-application/" rel="noopener noreferrer"&gt;&lt;em&gt;"Will Upgrading My MongoDB Server Version Break My Application?"&lt;/em&gt;&lt;/a&gt;, but there are still a lot of questions that pop up regarding driver compatibility, so I wanted to go further.&lt;/p&gt;

&lt;p&gt;Though good dependency management hygiene is important, it's a time-consuming process that can require extensive testing, so you typically want to do it on your own terms - not because of a service upgrade.&lt;/p&gt;

&lt;p&gt;Let's assume your application uses &lt;a href="https://github.com/mongodb/node-mongodb-native/releases/tag/v4.17.2" rel="noopener noreferrer"&gt;v4.17.2&lt;/a&gt; of the MongoDB &lt;a href="https://mongodb-node.netlify.app/docs/drivers/node/current/" rel="noopener noreferrer"&gt;Node.js driver&lt;/a&gt; and has been humming along for some time without issue. You got an email indicating your cluster was going to be upgraded from MongoDB 6.0 to MongoDB 7.0, but based on the &lt;a href="https://mongodb-node.netlify.app/docs/drivers/node/current/reference/compatibility/" rel="noopener noreferrer"&gt;driver compatibility table&lt;/a&gt;, that version of the driver isn't even present!&lt;/p&gt;

&lt;h2&gt;
  
  
  What compatibility tables actually mean
&lt;/h2&gt;

&lt;p&gt;MongoDB drivers are constantly being updated to add support for new features of the MongoDB server, as well as to address bugs/regressions and improve performance. The compatibility tables (such as the example below) are simply a reflection of which driver versions have had their test suites run against which MongoDB server versions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbi1shqwkqcv42xbe1i6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbi1shqwkqcv42xbe1i6.png" alt="Node.js driver compatibility matrix"&gt;&lt;/a&gt;&lt;/p&gt;
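&lt;p&gt;To make that concrete, here's a minimal sketch of the kind of lookup such a table represents. The entries below are illustrative placeholders, not the authoritative matrix - always consult the official compatibility table for your driver.&lt;/p&gt;

```javascript
// Hypothetical slice of a driver/server compatibility matrix.
// These entries are illustrative only - consult the official table.
const compat = {
  "4.x": ["4.4", "5.0", "6.0"],
  "5.x": ["5.0", "6.0", "7.0"],
  "6.x": ["6.0", "7.0", "8.0"]
};

// "Tested against" is all a compatibility table asserts; absence from
// the table does not mean the combination is broken.
function isTested(driverSeries, serverVersion) {
  const tested = compat[driverSeries];
  if (tested === undefined) return false;
  return tested.includes(serverVersion);
}

console.log(isTested("4.x", "7.0")); // false: untested, not necessarily broken
console.log(isTested("5.x", "7.0")); // true
```

&lt;p&gt;The point of the sketch is the semantics: a &lt;code&gt;false&lt;/code&gt; result only means "not verified by the driver's test suite", not "incompatible".&lt;/p&gt;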

&lt;p&gt;The MongoDB drivers are all built from a set of &lt;a href="https://github.com/mongodb/specifications" rel="noopener noreferrer"&gt;common specifications&lt;/a&gt;, which are updated periodically as new MongoDB server features necessitate changes. For example, &lt;a href="https://www.mongodb.com/docs/manual/release-notes/7.0/#atlas-search-index-management" rel="noopener noreferrer"&gt;MongoDB 7.0 introduced Atlas Search Index Management&lt;/a&gt;, which resulted in the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/index-management/index-management.md" rel="noopener noreferrer"&gt;index management specifications&lt;/a&gt; being updated to define APIs drivers can implement to support the new database commands required to perform this new function.&lt;/p&gt;

&lt;p&gt;If the version of the driver being used doesn't contain support for MongoDB 7.0, new APIs such as &lt;a href="https://mongodb.github.io/node-mongodb-native/6.17/classes/Collection.html#createSearchIndex" rel="noopener noreferrer"&gt;&lt;code&gt;Collection#createSearchIndex&lt;/code&gt;&lt;/a&gt; wouldn't be directly available - but if you don't need this MongoDB 7.0 feature, your existing application using v4.17.2 of the Node.js driver would continue to function as expected.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Since &lt;a href="https://www.mongodb.com/docs/manual/reference/command/createSearchIndexes/" rel="noopener noreferrer"&gt;&lt;code&gt;createSearchIndexes&lt;/code&gt;&lt;/a&gt; is a database command, even a driver version without convenient APIs for the feature can still be used to &lt;a href="https://mongodb-node.netlify.app/docs/drivers/node/current/run-command/" rel="noopener noreferrer"&gt;run the database command directly&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;


&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commandDoc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;createSearchIndexes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;contacts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;indexes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;searchIndex01&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;definition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;mappings&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;dynamic&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}]&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;myDB&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commandDoc&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What happens if I don't update my drivers
&lt;/h2&gt;

&lt;p&gt;The most likely outcome is - &lt;em&gt;nothing&lt;/em&gt;. Your application will continue to connect to your cluster, serialize and transmit database commands, and receive and deserialize command responses. Even if your driver version is not present on the compatibility matrix, the &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/" rel="noopener noreferrer"&gt;MongoDB Wire Protocol&lt;/a&gt; the drivers use to communicate with your cluster hardly ever changes.&lt;/p&gt;

&lt;p&gt;As such, operationally your workload &lt;em&gt;should&lt;/em&gt; continue to function as expected. Since the MongoDB server version has changed, the performance profile of your workload &lt;em&gt;may&lt;/em&gt; change, but this would not likely be a result of the driver remaining unchanged.&lt;/p&gt;

&lt;p&gt;Older drivers may not receive security updates or performance improvements - but the same is true of any of your application's dependencies. Plan to update your driver, but rest assured that you can do so on a schedule that suits your application and your business.&lt;/p&gt;
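&lt;p&gt;As a quick sanity check, you can log which server version your (possibly older) driver is actually talking to at startup. The sketch below assumes a connected &lt;code&gt;MongoClient&lt;/code&gt; from the &lt;code&gt;mongodb&lt;/code&gt; package; &lt;code&gt;buildInfo&lt;/code&gt; is a standard administrative command that reports the server version.&lt;/p&gt;

```javascript
// Split a version string like "7.0.12" into numeric major/minor parts.
function parseMajorMinor(version) {
  const parts = version.split(".").map(Number);
  return { major: parts[0], minor: parts[1] };
}

// `client` is assumed to be a connected MongoClient from the `mongodb` package.
async function logServerVersion(client) {
  // buildInfo is a stable administrative command available on modern servers
  const info = await client.db("admin").command({ buildInfo: 1 });
  const v = parseMajorMinor(info.version);
  console.log("connected to MongoDB server " + v.major + "." + v.minor);
  return info.version;
}
```

&lt;p&gt;Logging this on deploy makes it obvious when an Atlas-side upgrade has changed the server underneath an unchanged driver.&lt;/p&gt;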

&lt;h2&gt;
  
  
  Can my application actually break if I do nothing
&lt;/h2&gt;

&lt;p&gt;Yes - but for VERY specific reasons, all of which would be documented thoroughly and communicated prior to a major MongoDB server release.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wire Protocol changes
&lt;/h3&gt;

&lt;p&gt;As mentioned previously, the wire protocol is extremely stable and rarely changes. However, with the release of MongoDB 6.0, all &lt;a href="https://www.mongodb.com/docs/manual/release-notes/6.0-compatibility/#std-label-legacy-op-codes-removed" rel="noopener noreferrer"&gt;legacy opcodes were removed&lt;/a&gt;. This meant applications using drivers that had not been updated since MongoDB 3.4 (which reached EOL in 2020) would stop working as soon as their cluster was upgraded to 6.0.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;THIS IS THE ONLY WIRE PROTOCOL CHANGE OF THIS NATURE TO DATE&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Command removals
&lt;/h3&gt;

&lt;p&gt;On occasion, database commands may be replaced or removed. This will not happen without a deprecation period of at least one major release. This happened previously when &lt;a href="https://www.mongodb.com/docs/manual/release-notes/5.0-compatibility/#removed-commands" rel="noopener noreferrer"&gt;MongoDB 5.0 removed a number of deprecated commands&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If applications happen to be using those commands directly, once the MongoDB server is upgraded to a version that removes support for them, those applications would throw errors where those commands are used - such as the following example (using &lt;code&gt;mongosh&lt;/code&gt;, which is built using the Node.js driver):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;version&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="mf"&gt;4.4&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;
&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;resetError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;version&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="mf"&gt;5.0&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="mi"&gt;31&lt;/span&gt;
&lt;span class="nx"&gt;test&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;resetError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="nx"&gt;MongoServerError&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;CommandNotFound&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="nx"&gt;no&lt;/span&gt; &lt;span class="nx"&gt;such&lt;/span&gt; &lt;span class="nx"&gt;command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;resetError&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each major MongoDB server release is accompanied by both release notes and a list of compatibility changes. Make sure to review the compatibility changes to confirm that any removed commands aren't ones your application relies on.&lt;/p&gt;

&lt;p&gt;See the following for reference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/manual/release-notes/8.0-compatibility/" rel="noopener noreferrer"&gt;MongoDB 8.0 Compatibility Changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/manual/release-notes/7.0-compatibility/" rel="noopener noreferrer"&gt;MongoDB 7.0 Compatibility Changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/manual/release-notes/6.0-compatibility/" rel="noopener noreferrer"&gt;MongoDB 6.0 Compatibility Changes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/docs/manual/release-notes/5.0-compatibility/" rel="noopener noreferrer"&gt;MongoDB 5.0 Compatibility Changes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
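&lt;p&gt;If your application issues database commands directly, you can also probe for a command defensively before relying on it. This is a sketch, not an official driver API; it leans on the server reporting removed or unknown commands as &lt;code&gt;CommandNotFound&lt;/code&gt; (error code 59), as shown in the &lt;code&gt;mongosh&lt;/code&gt; output above.&lt;/p&gt;

```javascript
// Probe whether the connected server still supports a given command.
// `db` is assumed to be a Db handle from the `mongodb` package.
async function commandExists(db, commandName) {
  try {
    await db.command({ [commandName]: 1 });
    return true;
  } catch (err) {
    // CommandNotFound is reported as error code 59
    if (err.code === 59) return false;
    // Any other failure means the command exists but was rejected
    return true;
  }
}
```

&lt;p&gt;Running a check like this in a lower environment after a test upgrade surfaces removed commands before production does.&lt;/p&gt;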

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Not having a driver considered "compatible" doesn't mean the application won't continue to work&lt;/li&gt;
&lt;li&gt;Compatibility implies "feature compatibility" - as in "new" features of the server version listed&lt;/li&gt;
&lt;li&gt;Not upgrading your driver &lt;em&gt;shouldn't&lt;/em&gt; result in your application breaking&lt;/li&gt;
&lt;li&gt;There are scenarios where not upgrading the driver will break your application, but these are few and far between, and well documented&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are multiple benefits to keeping your application's dependencies up to date, but there can also be drawbacks. As such, it's important to test upgrades in a lower environment so that as many potential issues as possible are caught before applying changes in production.&lt;/p&gt;

&lt;p&gt;If you're working with Node.js, consider tooling like &lt;a href="https://github.com/pilotpirxie/dependency-time-machine" rel="noopener noreferrer"&gt;&lt;code&gt;dependency-time-machine&lt;/code&gt;&lt;/a&gt; to help with sequential dependency update automation. Since your dependency graph is likely complex, this type of approach can help update your application incrementally in a way that may minimize interdependency compatibility issues.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>database</category>
      <category>programming</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Call Stack, But Make It Async!</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Fri, 12 Jul 2024 18:10:02 +0000</pubDate>
      <link>https://dev.to/alexbevi/call-stack-but-make-it-async-3g7i</link>
      <guid>https://dev.to/alexbevi/call-stack-but-make-it-async-3g7i</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Written by &lt;a class="mentioned-user" href="https://dev.to/nbbeeken"&gt;@nbbeeken&lt;/a&gt; (&lt;a href="https://nbbeeken.github.io/" rel="noopener noreferrer"&gt;Blog&lt;/a&gt;, &lt;a href="https://github.com/nbbeeken" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;In a recent release of the MongoDB Node.js driver (&lt;a href="https://github.com/mongodb/node-mongodb-native/releases/tag/v6.5.0" rel="noopener noreferrer"&gt;v6.5.0&lt;/a&gt;) the team completed the effort of getting all our asynchronous operations to report an accurate asynchronous stack trace to assist in pinpointing error origination. Here, I'll walk you through what this feature of JavaScript is and how to obtain it at the low price of zero-cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Calls and how to stack them 📚
&lt;/h2&gt;

&lt;p&gt;First, what is a &lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/Call_stack" rel="noopener noreferrer"&gt;call stack&lt;/a&gt;? A call stack is a hidden data structure that stores information about the active subroutines of a program; active subroutines being functions that have been called but have yet to complete execution and return control to the caller. The main function of the call stack is to keep track of the point to which each active subroutine should return control when it finishes executing.&lt;/p&gt;

&lt;p&gt;Let's go through an example, take a program that parses a string from its arguments that is an equation like "2+2" and computes the result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;parseString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;splitString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
      &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;stringLength&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;stringToNumber&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;printResult&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of us are familiar with the above procedural paradigm (whether from JavaScript, C, Java, or Python) where each step in the program is synchronous, so our call stack is a clear ordering of dependent procedures. For example, if &lt;code&gt;stringLength&lt;/code&gt; fails, the call stack would contain &lt;code&gt;stringLength&lt;/code&gt;, &lt;code&gt;splitString&lt;/code&gt;, &lt;code&gt;parseString&lt;/code&gt;, and &lt;code&gt;main&lt;/code&gt; as active procedures that have yet to return to their callers. The error system of our runtime uses this stack trace to generate a helpful error trace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;file://addNumbers.mjs:35
    throw new Error('cannot get string length')
          ^
Error: cannot get string length
    at stringLength (file://addNumbers.mjs:35:11)
    at splitString (file://addNumbers.mjs:17:17)
    at parseString (file://addNumbers.mjs:11:19)
    at main (file://addNumbers.mjs:4:5)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Async wrench 🔧
&lt;/h3&gt;

&lt;p&gt;Everything changes when we shift to an asynchronous programming model, as the introduction of asynchronous work means we no longer have strictly dependent procedures. Essentially, async programming is about setting up tasks and adding handling that will be invoked some time later when the task is complete.&lt;/p&gt;

&lt;p&gt;Let's add I/O (a read from standard in) into our program to see how this changes our call stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;readStdin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;handleUserInput&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;// When the user finishes typing&lt;/span&gt;
&lt;span class="nf"&gt;handleUserInput&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;parseString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;splitString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;stringLength&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, &lt;code&gt;main&lt;/code&gt;'s only job is to ask the runtime to read from stdin and invoke a function of our choice when it is done doing so. This means &lt;code&gt;main&lt;/code&gt; is no longer an active procedure; it returns, leaving it up to the runtime to keep the process running until it has input from stdin to hand back to our function &lt;code&gt;handleUserInput&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here's what the stack trace looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;file://addNumbers.mjs:42
    throw new Error('cannot get string length')
    ^
Error: cannot get string length
    at stringLength (file://addNumbers.mjs:42:11)
    at splitString (file://addNumbers.mjs:24:17)
    at parseString (file://addNumbers.mjs:18:19)
    at ReadStream.handleUserInput (file://addNumbers.mjs:11:5)
    at ReadStream.emit (node:events:511:28)
    at addChunk (node:internal/streams/readable:332:12)
    at readableAddChunk (node:internal/streams/readable:305:9)
    at Readable.push (node:internal/streams/readable:242:10)
    at TTY.onStreamRead (node:internal/stream_base_commons:190:23)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No sign of &lt;code&gt;main&lt;/code&gt;, only &lt;code&gt;handleUserInput&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is a common hazard of asynchronous programming: the record of your active procedures is constantly being replaced, because each one only performs task setup and then returns; the callbacks it registered are invoked later by the runtime on a fresh stack.&lt;/p&gt;
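&lt;p&gt;You can observe this directly by scheduling a callback and inspecting the trace it sees. This is a small Node.js sketch; the exact frame names in the trace vary by runtime version.&lt;/p&gt;

```javascript
// Once setTimeout returns, scheduleWork's frame is popped off the stack.
// The callback later runs on a fresh stack owned by the runtime's timer code.
function scheduleWork(done) {
  setTimeout(() => {
    const trace = new Error("probe").stack;
    // scheduleWork returned long ago, so its frame is absent from the trace
    done(trace.includes("scheduleWork"));
  }, 0);
}

scheduleWork((callerFramePresent) => {
  console.log("caller frame present:", callerFramePresent);
});
```

&lt;p&gt;The trace the callback sees starts at the runtime's timer internals, not at the function that asked for the work.&lt;/p&gt;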

&lt;h2&gt;
  
  
  JavaScript 💚
&lt;/h2&gt;

&lt;p&gt;Asynchronous programming has always been at the heart of JS and is one of the central selling points of using Node.js.&lt;/p&gt;

&lt;p&gt;In 2015, the first &lt;a href="https://nodejs.org/en/blog/release/v4.2.0" rel="noopener noreferrer"&gt;Long Term Support version of Node.js was released&lt;/a&gt;, and with it came a stable standard library that popularized a common pattern for handling asynchronous tasks. All asynchronous tasks would accept a callback as their last argument, with the callback taking at least two arguments: an error, and the task's result. If the first argument was &lt;a href="https://developer.mozilla.org/en-US/docs/Glossary/Truthy" rel="noopener noreferrer"&gt;truthy&lt;/a&gt; (an error object), the task failed; otherwise, the second argument would contain the result.&lt;/p&gt;

&lt;p&gt;Here's a simplified example of a function that reads a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;filename.txt&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;file contents&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Node.js callback pattern is ubiquitous and familiar, resulting in many popular libraries such as the &lt;a href="https://www.mongodb.com/docs/drivers/node/current/" rel="noopener noreferrer"&gt;MongoDB Node.js driver&lt;/a&gt; adopting it as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  No throw, only callback 🐕
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaxrqaigwfkds87osm4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcaxrqaigwfkds87osm4s.png" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;image credit: &lt;a href="https://cupcakelogic.tumblr.com/post/124392369931/she-is-still-learning" rel="noopener noreferrer"&gt;cupcakelogic&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A challenge with the callback pattern is that the implementer must manually keep track of execution expectations; otherwise they can end up with a confusing order of operations.&lt;/p&gt;

&lt;p&gt;Typically this is something that should be abstracted away by the runtime or language. The problem breaks down as follows:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error handling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Properly implementing the callback pattern means errors are passed as variables to a chain of handlers so they eventually reach the top-level initiator of the async operation. The syntax and keywords &lt;code&gt;throw&lt;/code&gt;/&lt;code&gt;try&lt;/code&gt;/&lt;code&gt;catch&lt;/code&gt; can no longer be used for control flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;filename&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="cm"&gt;/* ? */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// So what's the truth?&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Runtime order&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Callbacks also demand that developers ensure execution order is consistent. If a file is successfully read and the contents are returned in the callback passed to &lt;code&gt;readFile&lt;/code&gt;, that callback will always run after the code on the line following &lt;code&gt;readFile&lt;/code&gt;. However, say &lt;code&gt;readFile&lt;/code&gt; is passed an invalid argument, like a number instead of a string for the path. When it invokes the callback with an invalid argument error, we would still expect that code to run in the same order as the success case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nf"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;invalid argument&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
       &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="c1"&gt;// open &amp;amp; read file ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mh"&gt;0xF113&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;cannot read file&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
       &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;contents:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;starting to read file&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above prints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cannot read file Error: invalid argument
starting to read file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whereas when I change &lt;code&gt;readFile&lt;/code&gt; to be called with a non-existent path:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;starting to read file
cannot read file Error: /notAPath.txt Does Not Exist
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is unexpected! The implementer of &lt;code&gt;readFile&lt;/code&gt; calls the callback synchronously for an invalid type, so &lt;code&gt;readFile&lt;/code&gt; does not return until that callback completes. It is fairly easy to write callback-accepting functions that inconsistently order their execution in this way.&lt;/p&gt;
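&lt;p&gt;A common fix is to always defer the callback, even when the failure is detected synchronously, so callers observe one consistent ordering. The sketch below uses &lt;code&gt;queueMicrotask&lt;/code&gt;; &lt;code&gt;process.nextTick&lt;/code&gt; would serve the same purpose in Node.js.&lt;/p&gt;

```javascript
// Always invoke the callback asynchronously, even for immediate validation
// failures, so ordering matches the success path.
function readFileSafe(filename, callback) {
  if (typeof filename !== "string") {
    queueMicrotask(() => callback(new Error("invalid argument")));
    return;
  }
  // ... open and read the file, then invoke callback(null, data)
}

readFileSafe(0xf113, (error) => console.log("cannot read file:", error.message));
console.log("starting to read file"); // now always prints first
```

&lt;p&gt;With the deferral in place, both the invalid-argument case and the missing-file case report their errors after the synchronous code has finished.&lt;/p&gt;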

&lt;h3&gt;
  
  
  Promises 🤞
&lt;/h3&gt;

&lt;p&gt;Introducing a more structured approach: &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise" rel="noopener noreferrer"&gt;Promises&lt;/a&gt;. A Promise is an object that handles the resolution or rejection of an async operation, mitigating the above issues and allowing for many async operations to be chained together without needing to explicitly pass a finalizer callback through to each API that would indicate when all tasks are done.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// callbacks&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;done&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="nx"&gt;client&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;db&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;({},&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;done&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
     &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;done&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
   &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// promises&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;
 &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
 &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;db&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;({}))&lt;/span&gt;
 &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
 &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note how the promise code has a single error-handling case, as opposed to the two in the callback version. The ability to &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Using_promises#chaining" rel="noopener noreferrer"&gt;chain promises&lt;/a&gt; allows us to treat many async operations as one: the &lt;code&gt;catch&lt;/code&gt; handler is called if either the &lt;code&gt;connect&lt;/code&gt; or the &lt;code&gt;findOne&lt;/code&gt; method throws an error. This chaining is convenient, but when writing JavaScript today we can do even better by using dedicated syntax for handling promises.&lt;/p&gt;
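&lt;p&gt;As a self-contained sketch with stand-in functions (rather than a real &lt;code&gt;MongoClient&lt;/code&gt;), a rejection at any stage of the chain lands in the single &lt;code&gt;catch&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

```javascript
// Stand-ins for client.connect() and collection.findOne()
function connectish() {
  return Promise.reject(new Error('connect failed'));
}
function findOneish() {
  return Promise.resolve({ _id: 1 });
}

// The rejection from connectish skips the then handler entirely and is
// handled by the one catch at the end of the chain.
const outcome = connectish()
  .then(() => findOneish())
  .catch(error => `caught: ${error.message}`);

outcome.then(value => console.log(value)); // prints "caught: connect failed"
```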

&lt;h3&gt;
  
  
  Enter &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; 🔁
&lt;/h3&gt;

&lt;p&gt;In mid-2017, JavaScript engines shipped support for &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; syntax, allowing programmers to write asynchronous operations in a familiar procedural format. Using &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; lets the programmer encode their logical asynchronous dependencies right into the syntax of the language.&lt;/p&gt;

&lt;p&gt;Let's return to our user input example, as we can now "await" the input, which keeps &lt;code&gt;main&lt;/code&gt; as the active procedure that began the task of reading from standard input.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"For &lt;code&gt;await&lt;/code&gt; the suspend and resume points coincide and so we not only know where we would continue, but by coincidence, we also know where we came from."&lt;/p&gt;

&lt;p&gt;source: &lt;a href="https://docs.google.com/document/d/13Sy_kBIJGP0XT34V1CV3nkWya4TwYx9L3Yv45LdGB6Q/edit#heading=h.e6lcalo0cl47" rel="noopener noreferrer"&gt;Zero-cost async stack traces&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When the input is available, &lt;code&gt;readStdin&lt;/code&gt; will resolve and we can continue with our parsing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;readStdin&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;parseString&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;p&gt;If an error is thrown during parsing, the resulting stack trace now reflects the full logical chain of calls, all the way back to &lt;code&gt;main&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;file://addNumbers.mjs:43
    throw new Error('cannot get string length')
          ^
Error: cannot get string length
    at stringLength (file://addNumbers.mjs:43:11)
    at splitString (file://addNumbers.mjs:25:17)
    at parseString (file://addNumbers.mjs:19:19)
    at main (file://addNumbers.mjs:9:5)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file://addNumbers.mjs:62:1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the JavaScript engine reaches the "await", &lt;code&gt;main&lt;/code&gt; is suspended. The engine is free to handle other tasks while the read is waiting for our user to type. We can now encode into the syntax of the function that it will suspend until some other task completes, and when it continues it maintains the context of everything that was in scope when it started.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nb"&gt;document&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;db&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;"The fundamental difference between &lt;code&gt;await&lt;/code&gt; and manually constructed promises is that &lt;code&gt;await X()&lt;/code&gt; &lt;strong&gt;suspends&lt;/strong&gt; execution of the current function, while &lt;code&gt;promise.then(X)&lt;/code&gt; will &lt;strong&gt;continue&lt;/strong&gt; execution of the current function after adding the &lt;code&gt;X&lt;/code&gt; call to the callback chain. In the context of stack traces, this difference is pretty significant."&lt;/p&gt;

&lt;p&gt;source: &lt;a href="https://mathiasbynens.be/notes/async-stack-traces" rel="noopener noreferrer"&gt;Why await beats Promise#then() · Mathias Bynens&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
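&lt;p&gt;This difference can be observed in a small sketch (the function names here are purely illustrative): the &lt;code&gt;await&lt;/code&gt;-ing caller is suspended when the error occurs, so V8 can record it in the async portion of the trace, while a caller that only attached a &lt;code&gt;then&lt;/code&gt; has already returned:&lt;br&gt;
&lt;/p&gt;

```javascript
async function failingOperation() {
  await null; // move past at least one tick before failing
  throw new Error('boom');
}

async function viaAwait() {
  await failingOperation(); // suspended here, so this frame can be recorded
}

function viaThen() {
  return failingOperation().then(() => {}); // returns immediately
}

// In modern V8 the first trace contains "at async viaAwait"; the second
// typically has no trace of viaThen at all.
viaAwait().catch(error => console.log(error.stack.includes('viaAwait')));
viaThen().catch(error => console.log(error.stack.includes('viaThen')));
```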

&lt;h2&gt;
  
  
  Sample Stack Traces
&lt;/h2&gt;

&lt;p&gt;Prior to completing the &lt;code&gt;async&lt;/code&gt;/&lt;code&gt;await&lt;/code&gt; conversion down to the internal network layer of the driver, our error stack would begin at the point where a server's error message was converted into a JavaScript error, such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MongoServerError: Failing command via 'failCommand' failpoint
    at Connection.onMessage (./mongodb/lib/cmap/connection.js:231:30)
    at MessageStream.&amp;lt;anonymous&amp;gt; (./mongodb/lib/cmap/connection.js:61:60)
    at MessageStream.emit (node:events:520:28)
    at processIncomingData (./mongodb/lib/cmap/message_stream.js:125:16)
    at MessageStream._write (./mongodb/lib/cmap/message_stream.js:33:9)
    at writeOrBuffer (node:internal/streams/writable:564:12)
    at _write (node:internal/streams/writable:493:10)
    at Writable.write (node:internal/streams/writable:502:10)
    at Socket.ondata (node:internal/streams/readable:1007:22)
    at Socket.emit (node:events:520:28)
                    ^-- Sadness, that's not my code...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, post v6.5.0, the stack trace points directly back to the origin of the operation (we see you &lt;code&gt;main.js&lt;/code&gt;!):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MongoServerError: Failing command via 'failCommand' failpoint
    at Connection.sendCommand (./mongodb/lib/cmap/connection.js:290:27)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Connection.command (./mongodb/lib/cmap/connection.js:313:26)
    at async Server.command (./mongodb/lib/sdam/server.js:167:29)
    at async FindOperation.execute (./mongodb/lib/operations/find.js:34:16)
    at async tryOperation (./mongodb/lib/operations/execute_operation.js:192:20)
    at async executeOperation (./mongodb/lib/operations/execute_operation.js:69:16)
    at async FindCursor._initialize (./mongodb/lib/cursor/find_cursor.js:51:26)
    at async FindCursor.cursorInit (./mongodb/lib/cursor/abstract_cursor.js:471:27)
    at async FindCursor.fetchBatch (./mongodb/lib/cursor/abstract_cursor.js:503:13)
    at async FindCursor.next (./mongodb/lib/cursor/abstract_cursor.js:228:13)
    at async Collection.findOne (./mongodb/lib/collection.js:274:21)
    at async main (./mongodb/main.js:19:3)
                   ^-- Yay, that's my code!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Additional Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.google.com/document/d/13Sy_kBIJGP0XT34V1CV3nkWya4TwYx9L3Yv45LdGB6Q/edit" rel="noopener noreferrer"&gt;Zero-cost async stack traces&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tc39/proposal-error-stacks" rel="noopener noreferrer"&gt;tc39/proposal-error-stacks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v8.dev/docs/stack-trace-api" rel="noopener noreferrer"&gt;Stack trace API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v8.dev/blog/modern-javascript#proper-tail-calls" rel="noopener noreferrer"&gt;ES2015, ES2016, and beyond · Tail Calls · V8&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v8.dev/blog/fast-async" rel="noopener noreferrer"&gt;Faster async functions and promises · V8&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://mathiasbynens.be/notes/async-stack-traces" rel="noopener noreferrer"&gt;Asynchronous stack traces: why await beats Promise#then() · Mathias Bynens&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>typescript</category>
      <category>node</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>Peeling the MongoDB Drivers Onion</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Tue, 21 May 2024 18:59:35 +0000</pubDate>
      <link>https://dev.to/alexbevi/peeling-the-mongodb-drivers-onion-2gma</link>
      <guid>https://dev.to/alexbevi/peeling-the-mongodb-drivers-onion-2gma</guid>
      <description>&lt;p&gt;The modern MongoDB driver consists of a number of components, each of which are thoroughly documented in the &lt;a href="https://github.com/mongodb/specifications"&gt;Specifications&lt;/a&gt; repository. Though this information is readily available and extremely helpful, what it lacks is a high level overview to tie the specs together into a cohesive picture of what a MongoDB driver is.&lt;/p&gt;

&lt;p&gt;Architecturally an implicit hierarchy exists within the drivers, so expressing drivers in terms of an &lt;a href="https://en.wikipedia.org/wiki/Onion_model"&gt;onion model&lt;/a&gt; feels appropriate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layers of the Onion
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cfa2qi7021e90nedmnn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7cfa2qi7021e90nedmnn.png" alt="Image description" width="786" height="836"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;"drivers onion"&lt;/em&gt; is meant to represent how various concepts, components and APIs can be layered atop each other to build a MongoDB driver from the ground up, or to help understand how existing drivers have been structured. Hopefully this representation of MongoDB’s drivers helps provide some clarity, as the complexity of these libraries - like the onion above - could otherwise bring you to tears.&lt;/p&gt;

&lt;h3&gt;
  
  
  Serialization
&lt;/h3&gt;

&lt;p&gt;At their lowest level, all MongoDB drivers will need to know how to work with &lt;a href="https://bsonspec.org/"&gt;BSON&lt;/a&gt;. BSON (short for "Binary JSON") is a binary-encoded serialization of &lt;a href="https://www.json.org/json-en.html"&gt;JSON&lt;/a&gt;-like documents, and like JSON, it supports the nesting of arrays and documents. BSON also contains extensions that allow representation of data types that are not part of the &lt;a href="https://datatracker.ietf.org/doc/html/rfc7159"&gt;JSON spec&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://bsonspec.org/spec.html"&gt;BSON&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/objectid.rst"&gt;ObjectId&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/bson-decimal128/decimal128.md"&gt;Decimal128&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/uuid.rst"&gt;UUID&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/dbref.md"&gt;DBRef&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/extended-json.rst"&gt;Extended JSON&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
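&lt;p&gt;To illustrate what that serialization looks like at the byte level, here is a minimal hand-rolled encoder for a document containing a single string field (a teaching sketch following the framing in the BSON spec, not a driver's actual BSON library):&lt;br&gt;
&lt;/p&gt;

```javascript
// Encode { [key]: value } where value is a string, per the BSON spec:
// int32 total length, 0x02 type byte, cstring key, length-prefixed string,
// and a trailing 0x00 terminating the document.
function encodeStringDoc(key, value) {
  const keyBytes = Buffer.from(key + '\0', 'utf8');   // field name as cstring
  const strBytes = Buffer.from(value + '\0', 'utf8'); // string data plus NUL
  const strLen = Buffer.alloc(4);
  strLen.writeInt32LE(strBytes.length);               // string length prefix
  const element = Buffer.concat([Buffer.from([0x02]), keyBytes, strLen, strBytes]);
  const docLen = Buffer.alloc(4);
  docLen.writeInt32LE(4 + element.length + 1);        // total document length
  return Buffer.concat([docLen, element, Buffer.from([0x00])]);
}

// The canonical example from bsonspec.org: { hello: "world" } is 22 (0x16) bytes
console.log(encodeStringDoc('hello', 'world').length); // 22
```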

&lt;h3&gt;
  
  
  Communication
&lt;/h3&gt;

&lt;p&gt;Once BSON documents can be created and manipulated, the foundation for interacting with a MongoDB host process has been laid. Drivers communicate by sending &lt;a href="https://www.mongodb.com/docs/manual/reference/command/"&gt;database commands&lt;/a&gt; as serialized BSON documents using MongoDB’s &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/"&gt;wire protocol&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;From the provided connection string and options, a socket connection is established to a host; an initial handshake then verifies it is in fact a valid MongoDB connection by sending a simple &lt;a href="https://www.mongodb.com/docs/manual/reference/command/hello/"&gt;&lt;code&gt;hello&lt;/code&gt;&lt;/a&gt; command. Based on the response to this first command, a driver can continue to establish and authenticate connections.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://github.com/mongodb/specifications/blob/master/source/message/OP_MSG.md"&gt;&lt;code&gt;OP_MSG&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/run-command/run-command.rst"&gt;Command Execution&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/connection-string/connection-string-spec.md"&gt;Connection String&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/uri-options/uri-options.md"&gt;URI Options&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/ocsp-support/ocsp-support.rst"&gt;OCSP&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/mongodb-handshake/handshake.rst"&gt;Initial Handshake&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/compression/OP_COMPRESSED.md"&gt;Wire Compression&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/socks5-support/socks5.rst"&gt;SOCKS5&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/initial-dns-seedlist-discovery/initial-dns-seedlist-discovery.md"&gt;Initial DNS Seedlist Discovery&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Connectivity
&lt;/h3&gt;

&lt;p&gt;Now that a valid host has been found, the cluster’s topology can be discovered and monitoring connections can be established. Connection pools can then be created and populated with connections. The monitoring connections will subsequently be used for ensuring operations are routed to available hosts, or hosts that meet certain criteria (such as a configured &lt;a href="https://www.mongodb.com/docs/upcoming/core/read-preference/"&gt;read preference&lt;/a&gt; or acceptable latency window).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst"&gt;SDAM&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/connection-monitoring-and-pooling/connection-monitoring-and-pooling.md"&gt;CMAP&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/load-balancers/load-balancers.md"&gt;Load Balancer Support&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Authentication
&lt;/h3&gt;

&lt;p&gt;Establishing and monitoring connections to MongoDB ensures they’re available, but MongoDB server processes typically will require the connection to be &lt;a href="https://www.mongodb.com/docs/manual/core/authentication/"&gt;authenticated&lt;/a&gt; before commands will be accepted. MongoDB offers many authentication mechanisms such as &lt;a href="https://www.mongodb.com/docs/manual/core/security-scram"&gt;SCRAM&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/manual/core/security-x.509/"&gt;x.509&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/manual/core/kerberos/"&gt;Kerberos&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/manual/core/security-ldap/"&gt;LDAP&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/manual/core/security-oidc/"&gt;OpenID Connect&lt;/a&gt; and &lt;a href="https://www.mongodb.com/docs/atlas/security/passwordless-authentication/"&gt;AWS IAM&lt;/a&gt;, which MongoDB drivers support using the &lt;em&gt;&lt;a href="https://www.ietf.org/rfc/rfc4422.txt"&gt;Simple Authentication and Security Layer&lt;/a&gt;&lt;/em&gt; (SASL) framework.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://github.com/mongodb/specifications/blob/master/source/auth/auth.md"&gt;Authentication&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Availability
&lt;/h3&gt;

&lt;p&gt;All client operations will be serialized as BSON and sent to MongoDB over a connection that will first be checked out of a connection pool. Various monitoring processes exist to ensure a driver’s internal state machine contains an accurate view of the cluster’s topology so that read and write requests can always be appropriately routed according to MongoDB’s &lt;a href="https://www.mongodb.com/docs/manual/core/read-preference-mechanics/"&gt;server selection algorithm&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-monitoring.md"&gt;Server Monitoring&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/polling-srv-records-for-mongos-discovery/polling-srv-records-for-mongos-discovery.rst"&gt;&lt;code&gt;SRV&lt;/code&gt; Polling for mongos Discovery&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md"&gt;Server Selection&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/max-staleness/max-staleness.md"&gt;Max Staleness&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Resilience
&lt;/h3&gt;

&lt;p&gt;At their core, database drivers are client libraries meant to facilitate interactions between an application and the database. MongoDB’s drivers are no different in that regard, as they abstract away the underlying serialization, communication, connectivity, and availability functions required to programmatically interact with your data.&lt;/p&gt;

&lt;p&gt;To further enhance the developer experience while working with MongoDB, various resilience features can be added based on &lt;a href="https://www.mongodb.com/docs/manual/reference/server-sessions/"&gt;logical sessions&lt;/a&gt; such as &lt;a href="https://www.mongodb.com/docs/manual/core/retryable-writes"&gt;retryable writes&lt;/a&gt;, &lt;a href="https://www.mongodb.com/docs/manual/core/read-isolation-consistency-recency/#std-label-causal-consistency"&gt;causal consistency&lt;/a&gt;, and &lt;a href="https://www.mongodb.com/docs/manual/core/transactions/"&gt;transactions&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; Retryability (&lt;a href="https://github.com/mongodb/specifications/blob/master/source/retryable-reads/retryable-reads.md"&gt;Reads&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/retryable-writes/retryable-writes.md"&gt;Writes&lt;/a&gt;), &lt;a href="https://github.com/mongodb/specifications/blob/master/source/client-side-operations-timeout/client-side-operations-timeout.md"&gt;CSOT&lt;/a&gt;, Consistency (&lt;a href="https://github.com/mongodb/specifications/blob/master/source/sessions/driver-sessions.md"&gt;Sessions&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/causal-consistency/causal-consistency.md"&gt;Causal Consistency&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/sessions/snapshot-sessions.md"&gt;Snapshot Reads&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/transactions/transactions.md"&gt;Transactions&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/transactions-convenient-api/transactions-convenient-api.rst"&gt;Convenient Transactions API&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Programmability
&lt;/h3&gt;

&lt;p&gt;Now that we can serialize commands and send them over the wire through an authenticated connection we can begin actually manipulating data. Since all database interactions are in the form of commands, if we wanted to remove a single document we might issue a &lt;a href="https://www.mongodb.com/docs/manual/reference/command/delete"&gt;&lt;code&gt;delete&lt;/code&gt; command&lt;/a&gt; such as the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runCommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="na"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;orders&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="na"&gt;deletes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;q&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;D&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Though this is not exceedingly complex, a better developer experience can be achieved through more single-purpose APIs, which would allow the above example to be expressed as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;deleteMany&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;D&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To provide a cleaner and clearer developer experience, many specifications exist to describe how these APIs should be consistently presented across driver implementations, while still providing the flexibility to make APIs more idiomatic for each language.&lt;/p&gt;

&lt;p&gt;Advanced security features such as &lt;a href="https://www.mongodb.com/docs/manual/core/csfle/"&gt;client-side field level encryption&lt;/a&gt; are also defined at this layer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; Resource Management (&lt;a href="https://github.com/mongodb/specifications/blob/master/source/enumerate-databases.rst"&gt;Databases&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/enumerate-collections.rst"&gt;Collections&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/index-management/index-management.md"&gt;Indexes&lt;/a&gt;), Data Management (&lt;a href="https://github.com/mongodb/specifications/blob/master/source/crud/crud.md"&gt;CRUD&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/collation/collation.md"&gt;Collation&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server_write_commands.rst"&gt;Write Commands&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/driver-bulk-update.rst"&gt;Bulk API&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/crud/bulk-write.md"&gt;Bulk Write&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/read-write-concern/read-write-concern.rst"&gt;R/W Concern&lt;/a&gt;), Cursors (&lt;a href="https://github.com/mongodb/specifications/blob/master/source/change-streams/change-streams.md"&gt;Change Streams&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/find_getmore_killcursors_commands.rst"&gt;&lt;code&gt;find&lt;/code&gt;/&lt;code&gt;getMore&lt;/code&gt;/&lt;code&gt;killCursors&lt;/code&gt;&lt;/a&gt;), &lt;a href="https://github.com/mongodb/specifications/blob/master/source/gridfs/gridfs-spec.md"&gt;GridFS&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/versioned-api/versioned-api.rst"&gt;Stable API&lt;/a&gt;, Security (&lt;a href="https://github.com/mongodb/specifications/blob/master/source/client-side-encryption/client-side-encryption.md"&gt;Client Side 
Encryption&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/client-side-encryption/subtype6.md"&gt;BSON Binary Subtype 6&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Observability
&lt;/h3&gt;

&lt;p&gt;With database commands being serialized and sent to MongoDB servers and responses being received and deserialized, our driver can be considered fully functional for most read and write operations. As MongoDB drivers abstract away most of the complexity involved with creating and maintaining the connections these commands will be sent over, offering introspection into a driver’s functionality gives developers added confidence that things are working as expected.&lt;/p&gt;

&lt;p&gt;The inner workings of connection pools, connection lifecycle, server monitoring, topology changes, command execution and other driver components are exposed by means of events developers can register listeners to capture. This can be an invaluable troubleshooting tool and can help facilitate monitoring the health of an application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;BSON&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;EJSON&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;debugPrint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;EJSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongodb://localhost:27017&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;monitorCommands&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
 &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;commandStarted&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;debugPrint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;commandStarted&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
 &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;connectionCheckedOut&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;debugPrint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;connectionCheckedOut&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
 &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;coll&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;db&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;foo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;coll&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;findOne&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
 &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Given the example above (using the &lt;a href="https://www.mongodb.com/docs/drivers/node/current/"&gt;Node.js driver&lt;/a&gt;) the specified connection events and command events would be logged as they’re emitted by the driver:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;connectionCheckedOut: {"time":{"$date":"2024-05-17T15:18:18.589Z"},"address":"localhost:27018","name":"connectionCheckedOut","connectionId":1}&lt;/code&gt;&lt;br&gt;&lt;br&gt;
&lt;code&gt;commandStarted: {"name":"commandStarted","address":"127.0.0.1:27018","connectionId":1,"serviceId":null,"requestId":5,"databaseName":"test","commandName":"find","command":{"find":"foo","filter":{},"limit":1,"singleBatch":true,"batchSize":1,"lsid":{"id":{"$binary":{"base64":"4B1kOPCGRUe/641MKhGT4Q==","subType":"04"}}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1715959097,"i":1}},"signature":{"hash":{"$binary":"base64":"AAAAAAAAAAAAAAAAAAAAAAAAAAA=","subType":"00"}},"keyId":0}},"$db":"test"},"serverConnectionId":140}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The preferred method of observing internal behavior would be through &lt;a href="https://github.com/mongodb/specifications/blob/master/source/logging/logging.md"&gt;standardized logging&lt;/a&gt; once it is available in all drivers (&lt;a href="https://jira.mongodb.org/browse/DRIVERS-1204"&gt;DRIVERS-1204&lt;/a&gt;), however until that time only event logging is consistently available. In the future additional observability tooling such as &lt;a href="https://opentelemetry.io/"&gt;Open Telemetry&lt;/a&gt; support may also be introduced.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://github.com/mongodb/specifications/blob/master/source/command-logging-and-monitoring/command-logging-and-monitoring.rst"&gt;Command Logging and Monitoring&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring-logging-and-monitoring.md"&gt;SDAM Logging and Monitoring&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/logging/logging.md"&gt;Standardized Logging&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/connection-monitoring-and-pooling/connection-monitoring-and-pooling.md#connection-pool-logging"&gt;Connection Pool Logging&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Testability
&lt;/h3&gt;

&lt;p&gt;To ensure existing as well as net-new drivers can be effectively tested for correctness and performance, most specifications define a standard set of tests using &lt;a href="https://web.archive.org/web/20230930061614/https://www.mongodb.com/blog/post/cat-herds-crook-yaml-test-specs-improve-driver-conformance"&gt;YAML tests to improve driver conformance&lt;/a&gt;. This allows specification authors and maintainers to describe functionality once, with the confidence that the tests can be executed consistently by language-specific test runners across all drivers.&lt;/p&gt;

&lt;p&gt;Though the unified test format greatly simplifies language-specific implementations, not all tests can be represented in this fashion. In those cases the specifications may describe the tests in prose, to be implemented manually. By limiting the number of prose tests each driver must implement, engineers can deliver functionality with greater confidence while also minimizing the burden of upstream verification.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Specifications:&lt;/strong&gt; &lt;a href="https://github.com/mongodb/specifications/blob/master/source/unified-test-format/unified-test-format.md"&gt;Unified Test Format&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/tree/master/source/atlas-data-lake-testing/tests"&gt;Atlas Data Federation Testing&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/benchmarking/benchmarking.md"&gt;Performance Benchmarking&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/bson-corpus/bson-corpus.md"&gt;BSON Corpus&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/tree/master/source/connections-survive-step-down/tests"&gt;Replication Event Resilience&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/faas-automated-testing/faas-automated-testing.md"&gt;FAAS Automated Testing&lt;/a&gt;, &lt;a href="https://github.com/mongodb/specifications/blob/master/source/serverless-testing/README.rst"&gt;Atlas Serverless Testing&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Most (if not all) of the information required to build a new driver or maintain existing drivers technically exists within the specifications; however, without a mental model of their composition and architecture it can be extremely challenging to know where to look.&lt;/p&gt;

&lt;p&gt;Peeling the &lt;em&gt;"drivers onion"&lt;/em&gt; should hopefully make reasoning about them a little easier, especially with the understanding that everything can be tested to validate individual implementations are "up to spec".&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>drivers</category>
      <category>architecture</category>
      <category>javascript</category>
    </item>
    <item>
      <title>MongoDB and Load Balancer Support</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Fri, 15 Mar 2024 01:53:45 +0000</pubDate>
      <link>https://dev.to/alexbevi/mongodb-and-load-balancer-support-1p28</link>
      <guid>https://dev.to/alexbevi/mongodb-and-load-balancer-support-1p28</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;A load balancer enhances your application's scalability, availability, and performance by efficiently distributing traffic across multiple servers based on a number of &lt;a href="https://kemptechnologies.com/load-balancer/load-balancing-algorithms-techniques"&gt;algorithms and techniques&lt;/a&gt; - but what about your database? MongoDB is a distributed database, but can it be placed behind a load balancer?&lt;/p&gt;

&lt;p&gt;Astute readers of MongoDB's Node.js driver's &lt;a href="https://github.com/mongodb/node-mongodb-native/tree/main/test#load-balanced"&gt;&lt;code&gt;test&lt;/code&gt; README&lt;/a&gt; may have noticed at some point that there is mention of a testing methodology for load balancers, and &lt;a href="https://www.mongodb.com/community/forums/t/load-balancing-mongos/247301"&gt;as some in the community have found&lt;/a&gt; you can find &lt;a href="https://jira.mongodb.org/browse/SERVER-58502"&gt;public &lt;code&gt;SERVER&lt;/code&gt; tickets&lt;/a&gt; that also allude to this functionality existing.&lt;/p&gt;

&lt;p&gt;Digging further you'll find the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/load-balancers/load-balancers.rst"&gt;&lt;em&gt;Load Balancer Support&lt;/em&gt;&lt;/a&gt; specification for MongoDB's drivers which states &lt;em&gt;"To specify to the driver to operate in load balancing mode, a connection string option of &lt;code&gt;loadBalanced=true&lt;/code&gt; MUST be added to the connection string"&lt;/em&gt; ... but how do you actually make that work?&lt;/p&gt;

&lt;p&gt;In this post we're going to explore why MongoDB nodes couldn't previously be placed behind an &lt;a href="https://www.nginx.com/resources/glossary/layer-4-load-balancing/"&gt;L4 load balancer&lt;/a&gt;, and what changed in MongoDB 5.3 that may actually make this possible!&lt;/p&gt;

&lt;h3&gt;
  
  
  Replication
&lt;/h3&gt;

&lt;p&gt;Coordination of data distribution and ensuring high availability is done via &lt;a href="https://www.mongodb.com/docs/manual/replication/"&gt;replication&lt;/a&gt;, which requires the cluster to be aware at all times which node is the &lt;a href="https://www.mongodb.com/docs/manual/core/replica-set-members/#primary"&gt;primary&lt;/a&gt; and which are &lt;a href="https://www.mongodb.com/docs/manual/core/replica-set-members/#secondaries"&gt;secondaries&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As there can only be one primary, any application targeting the cluster will need to be aware of the current topology as well, as trying to write to a secondary will fail:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="nb"&gt;require&lt;/span&gt; &lt;span class="s1"&gt;'mongo'&lt;/span&gt;
&lt;span class="c1"&gt;# connect directly to a secondary host in a local replica set&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Mongo&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'mongodb://localhost:27018/test?directConnection=true'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;collection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:foo&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_one&lt;/span&gt; &lt;span class="ss"&gt;bar: &lt;/span&gt;&lt;span class="s2"&gt;"baz"&lt;/span&gt;

&lt;span class="c1"&gt;# =&amp;gt; Mongo::Error::OperationFailure: [10107:NotWritablePrimary]: not primary (on localhost:27018, legacy retry, attempt 1)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All official MongoDB drivers implement the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst"&gt;&lt;em&gt;Server Discovery and Monitoring&lt;/em&gt;&lt;/a&gt; specification to ensure applications can route requests to the appropriate servers (as outlined in the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md"&gt;&lt;em&gt;Server Selection&lt;/em&gt;&lt;/a&gt; specification). When you have a single application instance with a single connection pool (as outlined in the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/connection-monitoring-and-pooling/connection-monitoring-and-pooling.md"&gt;&lt;em&gt;Connection Monitoring and Pooling&lt;/em&gt;&lt;/a&gt; specification) the number of connections to the cluster is easy to identify, but application deployment configurations can vary and scale.&lt;/p&gt;

&lt;p&gt;Thanks to MongoDB drivers all consistently providing connection monitoring and pooling functionality, external connection pooling solutions aren't required (ex: &lt;a href="https://www.pgpool.net/mediawiki/index.php/Main_Page"&gt;Pgpool&lt;/a&gt;, &lt;a href="https://www.pgbouncer.org/"&gt;PgBouncer&lt;/a&gt;). This allows applications built using MongoDB drivers to be resilient and scalable out of the box, but based on what we understand regarding &lt;a href="https://alexbevi.com/blog/2023/07/04/how-many-connections-is-my-application-establishing-to-my-mongodb-cluster/"&gt;the number of connections applications establish to MongoDB clusters&lt;/a&gt; it stands to reason that at a certain point as our application deployments increase, so will our connections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Can I use a load balancer though?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Due to the need for these additional monitoring connections it has been difficult (impossible?) to place a load balancer between applications and a MongoDB replica set - though adventurous users have developed &lt;a href="https://blog.danman.eu/mongodb-haproxy/"&gt;some interesting HAProxy configurations&lt;/a&gt; in the past to try and solve this problem. The problem you'd face is that though read requests can be routed to any available server, write requests &lt;em&gt;must&lt;/em&gt; target the cluster primary.&lt;/p&gt;

&lt;p&gt;For the sake of argument you may ask &lt;em&gt;"what if I had a 100% read workload?"&lt;/em&gt;. In that case you &lt;em&gt;could&lt;/em&gt; put your hosts behind a load balancer, but you'll likely run into issues as soon as you try and iterate a cursor (see &lt;a href="https://www.mongodb.com/docs/manual/reference/command/getMore/"&gt;&lt;code&gt;getMore&lt;/code&gt;&lt;/a&gt;). Operations such as &lt;a href="https://www.mongodb.com/docs/manual/reference/command/find/"&gt;&lt;code&gt;find&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://www.mongodb.com/docs/manual/reference/command/aggregate/"&gt;&lt;code&gt;aggregate&lt;/code&gt;&lt;/a&gt; return a cursor (&lt;code&gt;cursorId&lt;/code&gt;) which only exists on the originating server the command targeted. Attempting to execute a &lt;code&gt;getMore&lt;/code&gt; on the wrong server will result in a &lt;code&gt;CursorNotFound&lt;/code&gt; error being returned, which can be &lt;a href="https://alexbevi.com/blog/2021/12/29/troubleshooting-mongodb-cursor-xxxxxx-not-found-errors/"&gt;challenging to troubleshoot&lt;/a&gt;.&lt;/p&gt;
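&lt;p&gt;To make the failure mode concrete, here's a small self-contained simulation in plain JavaScript (no real servers or driver APIs involved; the &lt;code&gt;FakeServer&lt;/code&gt; class and error message are illustrative only) of what happens when a &lt;code&gt;getMore&lt;/code&gt; is routed to a different node than the one that created the cursor:&lt;/p&gt;

```javascript
// Illustrative simulation only (not driver code): a cursor lives in the
// memory of the server that created it, so a getMore routed to a
// different server behind a load balancer cannot find it.
class FakeServer {
  constructor(name) {
    this.name = name;
    this.cursors = new Map(); // cursorId -> batches remaining
    this.nextCursorId = 1;
  }
  find() {
    const cursorId = this.nextCursorId++;
    this.cursors.set(cursorId, 2); // pretend two more batches remain
    return { cursorId, server: this.name };
  }
  getMore(cursorId) {
    if (!this.cursors.has(cursorId)) {
      throw new Error(`CursorNotFound on ${this.name}: cursor id ${cursorId}`);
    }
    return { cursorId, ok: 1 };
  }
}

const mongos01 = new FakeServer('mongos01');
const mongos02 = new FakeServer('mongos02');

const cursor = mongos01.find();      // cursor created on mongos01
mongos01.getMore(cursor.cursorId);   // same server: succeeds
try {
  mongos02.getMore(cursor.cursorId); // round-robin picked another server
} catch (e) {
  console.log(e.message);            // CursorNotFound on mongos02: cursor id 1
}
```

&lt;p&gt;A naive L4 balancer has no notion of this per-server cursor state, which is exactly the gap the proxy-protocol support discussed below addresses.&lt;/p&gt;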

&lt;h3&gt;
  
  
  Sharding
&lt;/h3&gt;

&lt;p&gt;Fortunately, MongoDB already offers a form of "load balancing" for &lt;a href="https://www.mongodb.com/docs/manual/sharding/#sharded-cluster"&gt;sharded clusters&lt;/a&gt; in the form of the &lt;a href="https://www.mongodb.com/docs/manual/core/sharded-cluster-query-router/"&gt;sharded cluster query router&lt;/a&gt; (&lt;code&gt;mongos&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;If there is more than one &lt;code&gt;mongos&lt;/code&gt; instance in the &lt;a href="https://www.mongodb.com/docs/manual/reference/glossary/#std-term-seed-list"&gt;connection seed list&lt;/a&gt;, the driver determines which &lt;code&gt;mongos&lt;/code&gt; is the "closest" (i.e. the member with the lowest average network round-trip time) and calculates the latency window by adding the &lt;code&gt;localThresholdMS&lt;/code&gt; to that average round-trip time. The driver will then load balance randomly across the &lt;code&gt;mongos&lt;/code&gt; instances that fall within the latency window.&lt;/p&gt;
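&lt;p&gt;The latency window arithmetic can be sketched in a few lines of plain JavaScript (the RTT values and helper function below are illustrative, not driver internals; 15ms is the specification's default for &lt;code&gt;localThresholdMS&lt;/code&gt;):&lt;/p&gt;

```javascript
// Sketch of latency-window filtering: rtt is each mongos' average
// round-trip time in milliseconds.
function eligibleMongos(servers, localThresholdMS = 15) {
  const minRTT = Math.min(...servers.map((s) => s.rtt));
  const window = minRTT + localThresholdMS;
  return servers.filter((s) => s.rtt <= window);
}

const mongosList = [
  { host: 'mongos01:27017', rtt: 10 },
  { host: 'mongos02:27017', rtt: 20 },
  { host: 'mongos03:27017', rtt: 40 },
];

// window = 10 + 15 = 25ms, so mongos03 (40ms) falls outside it
const candidates = eligibleMongos(mongosList);
console.log(candidates.map((s) => s.host)); // [ 'mongos01:27017', 'mongos02:27017' ]

// the driver then load balances randomly across the candidates
const chosen = candidates[Math.floor(Math.random() * candidates.length)];
```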

&lt;p&gt;&lt;strong&gt;Can I use a load balancer though?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Sharding introduces a routing layer between the application and the cluster members, which slightly simplifies how drivers route operations as there is no longer a need to track replica set state. You may think this would make placing a pool of &lt;code&gt;mongos&lt;/code&gt;' behind a load balancer straightforward, but as Craig Wilson describes in a &lt;a href="http://craiggwilson.com/2013/10/21/load-balanced-mongos/"&gt;2013 blog post&lt;/a&gt;, similar issues will still arise when trying to iterate cursors. Note that though Craig's post references the &lt;a href="https://www.mongodb.com/docs/manual/legacy-opcodes/"&gt;legacy opcodes&lt;/a&gt;, the situation would be the same if using newer drivers that leverage &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/#std-label-wire-op-msg"&gt;&lt;code&gt;OP_MSG&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/#std-label-wire-op-compressed"&gt;&lt;code&gt;OP_COMPRESSED&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that the &lt;em&gt;Server Selection&lt;/em&gt; specification &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#cursors"&gt;calls out that&lt;/a&gt; &lt;em&gt;"Cursor operations [...] do not go through the server selection process. Cursor operations must be sent to the original server that received the query [...]"&lt;/em&gt;. As this state information would not be tracked within the load balancer, issues would arise if a cursor operation were attempted and a balancer returned a different server where the cursor didn't exist.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;operationCount&lt;/code&gt;-based Server Selection
&lt;/h3&gt;

&lt;p&gt;As it is a form of "load balancing" it's worth just calling out that in an effort to alleviate runaway connection creation scenarios ("connection storms") the drivers &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-selection/server-selection.md#operationcount-based-selection-within-the-latency-window-multi-threaded-or-async"&gt;approximate an individual server's load&lt;/a&gt; by tracking the number of concurrent operations that node is processing (&lt;code&gt;operationCount&lt;/code&gt;) and then routing operations to servers with less load. This should reduce the number of new operations routed towards nodes that are busier and thus increase the number routed towards nodes that are servicing operations faster or are simply less busy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Load Balancer Support
&lt;/h2&gt;

&lt;p&gt;When you see a ticket called &lt;a href="https://jira.mongodb.org/browse/SERVER-58207"&gt;"Enable Feature flag for Support for Deploying MongoDB behind a L4 Load Balancer"&lt;/a&gt; closed out as fixed for MongoDB&lt;br&gt;
6.0.0-rc0 and 5.3.0-rc3, it's hard not to get excited - but what does this mean? After doing a bit of digging you'll find that &lt;code&gt;mongos&lt;/code&gt; instances now support a proxy protocol, which is configured via the &lt;a href="https://github.com/mongodb/mongo/blob/r7.0.6/src/mongo/s/mongos_server_parameters.idl#L66-L74"&gt;&lt;code&gt;loadBalancerPort&lt;/code&gt;&lt;/a&gt; startup parameter.&lt;/p&gt;

&lt;p&gt;Given that there's a driver specification, driver implementations (such as for the &lt;a href="https://jira.mongodb.org/browse/NODE-3011"&gt;Node.js driver&lt;/a&gt; and &lt;a href="https://jira.mongodb.org/browse/RUBY-2515"&gt;Ruby driver&lt;/a&gt;) and server support, it should be possible to configure a sharded cluster to utilize the proxy protocol.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before we proceed it's worth calling out that this is not considered an officially supported configuration. Until MongoDB's server team promotes this as a valid production configuration, it should be considered experimental if used with a self-managed deployment.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;For our test we'll be configuring a single-shard sharded cluster with 5 &lt;code&gt;mongos&lt;/code&gt;' behind an &lt;a href="https://www.haproxy.org/"&gt;HAProxy&lt;/a&gt; load balancer. Assuming you're already familiar with &lt;a href="https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts"&gt;HAProxy and load balancing concepts&lt;/a&gt;, we'll be setting up a &lt;a href="https://www.haproxy.com/documentation/haproxy-configuration-tutorials/load-balancing/tcp/#enable-tcp-mode"&gt;TCP proxy&lt;/a&gt; to perform &lt;code&gt;roundrobin&lt;/code&gt; balancing.&lt;/p&gt;
&lt;h4&gt;
  
  
  1. Setup a local sharded cluster
&lt;/h4&gt;

&lt;p&gt;First we need a local sharded cluster, which we'll provision using &lt;a href="https://github.com/aheckmann/m"&gt;&lt;code&gt;m&lt;/code&gt; - the MongoDB version manager&lt;/a&gt; and &lt;a href="https://github.com/rueckstiess/mtools"&gt;&lt;code&gt;mtools&lt;/code&gt;&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;m 7.0.6-ent
mlaunch init &lt;span class="nt"&gt;--replicaset&lt;/span&gt; &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3 &lt;span class="nt"&gt;--shards&lt;/span&gt; 1 &lt;span class="nt"&gt;--csrs&lt;/span&gt; &lt;span class="nt"&gt;--mongos&lt;/span&gt; 5 &lt;span class="nt"&gt;--binarypath&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;m bin 7.0.6-ent&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;--bind_ip_all&lt;/span&gt;
mlaunch stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration will yield a single &lt;a href="https://www.mongodb.com/docs/manual/core/sharded-cluster-shards/"&gt;shard&lt;/a&gt; with 3 nodes, a &lt;a href="https://www.mongodb.com/docs/manual/core/sharded-cluster-config-servers/#replica-set-config-servers"&gt;config server replica set&lt;/a&gt; and 5 &lt;code&gt;mongos&lt;/code&gt;'. Once started, we immediately stop the cluster as some additional (manual) configuration is required.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Update the cluster configuration to enable proxy protocol
&lt;/h4&gt;

&lt;p&gt;Since we need to modify the startup parameters for our &lt;code&gt;mongos&lt;/code&gt;' we'll update the configuration file that &lt;code&gt;mlaunch&lt;/code&gt; (part of &lt;code&gt;mtools&lt;/code&gt;) uses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'s/ --port 27017 / --port 27017 --setParameter loadBalancerPort=37017 /g'&lt;/span&gt; data/.mlaunch_startup
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'s/ --port 27018 / --port 27018 --setParameter loadBalancerPort=37018 /g'&lt;/span&gt; data/.mlaunch_startup
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'s/ --port 27019 / --port 27019 --setParameter loadBalancerPort=37019 /g'&lt;/span&gt; data/.mlaunch_startup
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'s/ --port 27020 / --port 27020 --setParameter loadBalancerPort=37020 /g'&lt;/span&gt; data/.mlaunch_startup
&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt; &lt;span class="s1"&gt;'s/ --port 27021 / --port 27021 --setParameter loadBalancerPort=37021 /g'&lt;/span&gt; data/.mlaunch_startup
mlaunch start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above commands just append a &lt;a href="https://www.mongodb.com/docs/manual/reference/command/setParameter/"&gt;&lt;code&gt;setParameter&lt;/code&gt;&lt;/a&gt; call as a command line option so we can configure the &lt;code&gt;loadBalancerPort&lt;/code&gt; parameter of each &lt;code&gt;mongos&lt;/code&gt;. Once completed we restart the cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Configure HAProxy
&lt;/h4&gt;

&lt;p&gt;As we're using HAProxy for our test, we'll need to build out a custom configuration. The example below will write to a &lt;code&gt;mongodb-lb.conf&lt;/code&gt; file, which will then be read by &lt;code&gt;haproxy&lt;/code&gt; to create our load balanced endpoint. I'm not going to go into detail as to what all the options below mean, but if you want to investigate further see &lt;a href="https://www.haproxy.com/documentation/haproxy-configuration-manual/latest/"&gt;HAProxy's configuration manual&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tee &lt;/span&gt;mongodb-lb.conf &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOT&lt;/span&gt;&lt;span class="sh"&gt;
global
  log stdout local0 debug
  maxconn 4096

defaults
  log global
  mode tcp
  timeout connect  5000ms
  timeout client  30000ms
  timeout server  30000ms
  retries 3

default-server on-error fastinter error-limit 3 inter 3000ms fastinter 1000ms downinter 300s fall 3

frontend stats
    mode http
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s
    stats admin if LOCALHOST

listen mongos
  bind      *:37000
  option    tcplog
  balance   roundrobin
  server    mongos01 *:37017 check send-proxy-v2
  server    mongos02 *:37018 check send-proxy-v2
  server    mongos03 *:37019 check send-proxy-v2
  server    mongos04 *:37020 check send-proxy-v2
  server    mongos05 *:37021 check send-proxy-v2
&lt;/span&gt;&lt;span class="no"&gt;EOT

&lt;/span&gt;haproxy &lt;span class="nt"&gt;-f&lt;/span&gt; mongodb-lb.conf &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; haproxy.log 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To make monitoring a little easier you'll notice we've enabled &lt;a href="https://www.haproxy.com/blog/exploring-the-haproxy-stats-page"&gt;HAProxy's stats frontend&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Test application connectivity through the load balancer
&lt;/h4&gt;

&lt;p&gt;Since the &lt;a href="https://www.mongodb.com/docs/mongodb-shell/"&gt;MongoDB Shell&lt;/a&gt; uses the &lt;a href="https://www.mongodb.com/docs/drivers/node/current/"&gt;Node.js driver&lt;/a&gt; internally, we can use it to validate that our load balancer is configured properly. We've set up HAProxy to listen on port 37000, so we should be able to connect to that directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongosh &lt;span class="nt"&gt;--quiet&lt;/span&gt; &lt;span class="s2"&gt;"mongodb://localhost:37000/test"&lt;/span&gt;
MongoServerSelectionError: The server is being accessed through a load balancer, but this driver does not have load balancing enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Seems the driver knows we're trying to connect to a load balancer, but we're missing an option. This is where the &lt;code&gt;loadBalanced=true&lt;/code&gt; option comes into play. Appending this to our connection string will allow us to run an arbitrary workload successfully:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongosh &lt;span class="nt"&gt;--quiet&lt;/span&gt; &lt;span class="s2"&gt;"mongodb://localhost:37000/test?loadBalanced=true"&lt;/span&gt; &lt;span class="nt"&gt;--eval&lt;/span&gt; &lt;span class="s2"&gt;"while(true) { result = db.foo.insertOne({ d: new Date() }); print(result); sleep(500); }"&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  acknowledged: &lt;span class="nb"&gt;true&lt;/span&gt;,
  insertedId: ObjectId&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'65eb13b122c34af3037c094d'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  acknowledged: &lt;span class="nb"&gt;true&lt;/span&gt;,
  insertedId: ObjectId&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'65eb13b222c34af3037c094e'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;{&lt;/span&gt;
  acknowledged: &lt;span class="nb"&gt;true&lt;/span&gt;,
  insertedId: ObjectId&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'65eb13b222c34af3037c094f'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success! It's worth noting, though, that this configuration works for us locally because we have direct control over the &lt;code&gt;mongos&lt;/code&gt; processes' startup parameters.&lt;/p&gt;
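&lt;p&gt;When assembling these options programmatically, Node's built-in WHATWG &lt;code&gt;URL&lt;/code&gt; parser offers a quick sanity check of a single-host connection string before handing it to the driver. A minimal sketch (note that multi-host &lt;code&gt;mongodb://&lt;/code&gt; URIs are not valid WHATWG URLs, so this only works for a single host:port pair like our HAProxy frontend):&lt;/p&gt;

```javascript
// Sanity-check a single-host connection string with Node's built-in WHATWG
// URL parser before passing it to mongosh or the driver.
const uri = new URL('mongodb://localhost:37000/test?loadBalanced=true');

console.log(uri.port);                             // 37000
console.log(uri.searchParams.get('loadBalanced')); // true
```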

&lt;blockquote&gt;
&lt;p&gt;If you have a MongoDB Atlas sharded cluster, the &lt;code&gt;mongos&lt;/code&gt; processes cannot be manually placed behind a load balancer, as access to their startup parameters is not available!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Now that we can successfully connect to our load-balanced endpoint, it's worth doing a little chaos testing to see how workloads react. The script I shared previously loops infinitely, inserting documents into a collection - but what happens if we kill one or two &lt;code&gt;mongos&lt;/code&gt; processes?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mlaunch stop 27017
mlaunch stop 27019
mlaunch list
Detected mongod version: 7.0.6

PROCESS          PORT     STATUS     PID

mongos           27017    down       -
mongos           27018    running    28006
mongos           27019    down       -
mongos           27020    running    28013
mongos           27021    running    28016

config server    27025    running    27979

shard01
    mongod       27022    running    27994
    mongod       27023    running    27998
    mongod       27024    running    27991
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using &lt;code&gt;mlaunch&lt;/code&gt; I stopped two of the query routers and waited a while. The inserts kept right on going, so we can consider that a successful test. Note that this was obviously not extensive and should not be taken as a guarantee of any sort, but if this configuration interests you, give it a shot and let me know what you find.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6teromsut55razz0gty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6teromsut55razz0gty.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Don't forget that you have a web-based stats UI configured that you can refer to 😉.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>networking</category>
      <category>performance</category>
      <category>database</category>
    </item>
    <item>
      <title>Understanding client library and database preferences for JS/TS developers</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Tue, 12 Mar 2024 20:08:04 +0000</pubDate>
      <link>https://dev.to/alexbevi/understanding-client-library-and-database-preferences-for-jsts-developers-j9o</link>
      <guid>https://dev.to/alexbevi/understanding-client-library-and-database-preferences-for-jsts-developers-j9o</guid>
      <description>&lt;p&gt;I'm a Product Manager focusing on Developer Interfaces and am looking to better understand what client libraries and databases JavaScript/TypeScript developers currently prefer. If you're a JavaScript/TypeScript developer and have 5 minutes to spare I'd love to hear from you!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSd02AdPauVVWpzeObBgr0UoQ9MlxMCHRrTKhTQX5oLn5CDDiw/viewform?usp=pp_url&amp;amp;entry.237887320=Dev.to"&gt;Click here for the survey&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since everyone's time is valuable, if you complete this survey your email address will be entered into a draw to win a $50 Amazon gift card!&lt;/p&gt;

&lt;p&gt;I'll have the survey open until March 22nd, 2024. Once the results are processed I'll share my findings here in case others find it useful.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>survey</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Node.js Driver failing to connect due to "unsafe legacy renegotiation disabled"</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Tue, 05 Mar 2024 12:54:32 +0000</pubDate>
      <link>https://dev.to/alexbevi/nodejs-driver-failing-to-connect-due-to-unsafe-legacy-renegotiation-disabled-34ca</link>
      <guid>https://dev.to/alexbevi/nodejs-driver-failing-to-connect-due-to-unsafe-legacy-renegotiation-disabled-34ca</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://www.mongodb.com/community/forums/t/mongoserverselectionerror-c83200000a000152-ssl-routinesunsafe-legacy-renegotiation-disabled/262568"&gt;this community forum post&lt;/a&gt; there was a report of the &lt;a href="https://www.mongodb.com/docs/drivers/node/current/"&gt;MongoDB Node.js driver&lt;/a&gt; failing to connect with the following error:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;MongoServerSelectionError: C8320000:error:0A000152:SSL routines:final_renegotiate:unsafe legacy renegotiation disabled:c:\ws\deps\openssl\openssl\ssl\statem\extensions.c:922&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This error doesn't smell like a MongoDB-specific error, so digging into &lt;em&gt;"&lt;code&gt;final_renegotiate:unsafe legacy renegotiation disabled&lt;/code&gt;"&lt;/em&gt; specifically led to &lt;a href="https://github.com/openssl/openssl/issues/21296"&gt;this &lt;code&gt;openssl&lt;/code&gt; issue&lt;/a&gt;, which elaborates on the meaning of the error message:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TLSv1.2 (and earlier) support the concept of renegotiation. In 2009 (i.e. after the TLSv1.2 RFC was published), a flaw was discovered with how renegotiation works that could lead to an attack. After the attack was discovered a fix was deployed to all TLS libraries. In order for the fixed version of renegotiation to work both the client and the server need to support it.&lt;/p&gt;

&lt;p&gt;The original (unfixed) version of renegotiation is known as "unsafe legacy renegotiation" in OpenSSL. The fixed version is known as "secure renegotiation". So either a peer does not have the fix, in which case it will be using &lt;em&gt;"unsafe legacy renegotiation"&lt;/em&gt;, or it does have the fix in which case it will be using &lt;em&gt;"secure renegotiation"&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So it seems the error originated from OpenSSL, and the flaw they're alluding to was likely &lt;a href="https://nvd.nist.gov/vuln/detail/cve-2009-3555"&gt;CVE-2009-3555&lt;/a&gt;. What was particularly interesting about this issue is that it &lt;em&gt;only&lt;/em&gt; occurred when the application was run using Node.js 20, while Node.js 16 didn't exhibit any issues - so what's different between those two versions? One notable change is that &lt;a href="https://nodejs.org/en/blog/vulnerability/openssl-november-2022"&gt;Node.js 17+ uses OpenSSL 3.0 by default&lt;/a&gt; - and starting with 3.0, &lt;a href="https://github.com/openssl/openssl/pull/15127"&gt;secure renegotiation support is required by default&lt;/a&gt;.&lt;/p&gt;
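&lt;p&gt;If you're unsure which OpenSSL version your runtime was built against, Node.js exposes it directly:&lt;/p&gt;

```javascript
// Print the Node.js version and the OpenSSL version the binary links
// against (Node 17+ ships OpenSSL 3.x by default).
console.log(process.versions.node, process.versions.openssl);
```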

&lt;p&gt;For more information on secure server-side renegotiation I'd highly recommend &lt;a href="https://github.com/openssl/openssl/discussions/21747"&gt;this discussion&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring OpenSSL via the Node.js Driver
&lt;/h2&gt;

&lt;p&gt;A similar issue was reported on Stack Overflow for the &lt;a href="https://www.npmjs.com/package/axios"&gt;&lt;code&gt;axios&lt;/code&gt; library&lt;/a&gt;, and the &lt;a href="https://stackoverflow.com/a/74600467/195509"&gt;solution&lt;/a&gt; there was to pass &lt;code&gt;secureOptions: crypto.constants.SSL_OP_LEGACY_SERVER_CONNECT&lt;/code&gt; during request creation. As &lt;code&gt;secureOptions&lt;/code&gt; is an option passed to Node's &lt;a href="https://nodejs.org/api/tls.html#tlscreatesecurecontextoptions"&gt;&lt;code&gt;tls.createSecureContext&lt;/code&gt;&lt;/a&gt; API (which MongoDB &lt;a href="https://www.mongodb.com/docs/drivers/node/current/fundamentals/connection/tls/#securecontext-example"&gt;documents an example of using&lt;/a&gt;) it should be possible to do something similar with the Node.js driver.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MongoClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mongodb+srv://...&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;secureContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;secureOptions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;constants&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SSL_OP_LEGACY_SERVER_CONNECT&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;SUCCESS! The above example allows a &lt;code&gt;SecureContext&lt;/code&gt; object to be created with the &lt;code&gt;secureOptions&lt;/code&gt; selected from the &lt;a href="https://nodejs.org/api/crypto.html#openssl-options"&gt;enumerated OpenSSL options Node.js has defined&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Though the Node.js driver allows direct configuration of the &lt;code&gt;SecureContext&lt;/code&gt; object, as other &lt;a href="https://www.mongodb.com/docs/drivers/"&gt;MongoDB drivers&lt;/a&gt; &lt;em&gt;may not&lt;/em&gt;, &lt;a href="https://jira.mongodb.org/browse/DRIVERS-2823"&gt;DRIVERS-2823&lt;/a&gt; is being considered to ensure this type of configuration is available.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Alternative Configuration
&lt;/h2&gt;

&lt;p&gt;Configuring the MongoDB Node.js driver's OpenSSL options directly is likely the preferred approach, but the Node.js runtime itself can also be configured (via &lt;a href="https://nodejs.org/api/cli.html#--openssl-configfile"&gt;&lt;code&gt;--openssl-config=file&lt;/code&gt;&lt;/a&gt;). In this case, when the &lt;code&gt;node&lt;/code&gt; process is executed, the path to a custom OpenSSL configuration file can be provided as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;--openssl-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/path/to/openssl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;openssl.conf&lt;/code&gt; is set up similarly to the example below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nodejs_conf = openssl_init

[openssl_init]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
Options = UnsafeLegacyRenegotiation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>mongodb</category>
      <category>node</category>
      <category>ssl</category>
    </item>
    <item>
      <title>querySrv errors when connecting to MongoDB Atlas</title>
      <dc:creator>Alex Bevilacqua</dc:creator>
      <pubDate>Thu, 29 Feb 2024 15:16:46 +0000</pubDate>
      <link>https://dev.to/alexbevi/querysrv-errors-when-connecting-to-mongodb-atlas-434j</link>
      <guid>https://dev.to/alexbevi/querysrv-errors-when-connecting-to-mongodb-atlas-434j</guid>
      <description>&lt;p&gt;If your application uses MongoDB's &lt;a href="https://www.mongodb.com/docs/drivers/node/current/"&gt;Node.js driver&lt;/a&gt; or &lt;a href="https://mongoosejs.com/"&gt;Mongoose ODM&lt;/a&gt;, occasionally you may observe errors such as &lt;code&gt;querySrv ECONNREFUSED _mongodb._tcp.cluster0.abcde.mongodb.net&lt;/code&gt; or &lt;code&gt;Error: querySrv ETIMEOUT _mongodb._tcp.cluster0.abcde.mongodb.net&lt;/code&gt; being thrown. The MongoDB Atlas documentation outlines several methods to &lt;a href="https://www.mongodb.com/docs/atlas/troubleshoot-connection/"&gt;troubleshoot connection issues&lt;/a&gt;, including how to handle &lt;a href="https://www.mongodb.com/docs/atlas/troubleshoot-connection/#connection-refused-using-srv-connection-string"&gt;"Connection Refused using SRV Connection String"&lt;/a&gt; scenarios, but why does this happen in the first place?&lt;/p&gt;

&lt;h2&gt;
  
  
  About DNS seedlists
&lt;/h2&gt;

&lt;p&gt;To coincide with the release of MongoDB 3.6, all drivers (at the time) implemented the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/initial-dns-seedlist-discovery/initial-dns-seedlist-discovery.rst"&gt;initial DNS seedlist discovery&lt;/a&gt; specification to ensure connections could be established using the new &lt;a href="https://www.mongodb.com/docs/manual/reference/connection-string/#std-label-connections-dns-seedlist"&gt;&lt;code&gt;SRV&lt;/code&gt; connection string format&lt;/a&gt;, as well as the legacy &lt;a href="https://www.mongodb.com/docs/manual/reference/connection-string/#standard-connection-string-format"&gt;standard connection string format&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This functionality was introduced to abstract away the complexity of MongoDB's connection strings (for MongoDB Atlas users at least) by moving the component parts of a &lt;a href="https://www.mongodb.com/docs/manual/reference/connection-string/"&gt;connection string&lt;/a&gt; to two DNS records: a &lt;a href="https://en.wikipedia.org/wiki/SRV_record"&gt;service record (&lt;code&gt;SRV&lt;/code&gt;)&lt;/a&gt; and a &lt;a href="https://en.wikipedia.org/wiki/TXT_record"&gt;text record (&lt;code&gt;TXT&lt;/code&gt;)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Users now only need to supply a connection string such as &lt;code&gt;mongodb+srv://&amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;@cluster0.abcde.mongodb.net/myFirstDatabase&lt;/code&gt;, and regardless of whether the underlying cluster is a replica set or sharded, the connection string remains the same. Furthermore, use of &lt;code&gt;mongodb+srv://&lt;/code&gt; enables drivers to detect additions/removals of &lt;code&gt;mongos&lt;/code&gt; nodes in a sharded cluster.&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;
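&lt;p&gt;For illustration, the &lt;code&gt;SRV&lt;/code&gt; query name a driver issues is simply the cluster hostname with the service and protocol labels prepended ("mongodb" being the default service name per the seedlist specification). A hypothetical helper, not a driver API:&lt;/p&gt;

```javascript
// Derive the SRV query name a driver would issue for a mongodb+srv:// host.
// "mongodb" is the default service name from the DNS seedlist specification.
function srvQueryName(host, service) {
  return '_' + (service || 'mongodb') + '._tcp.' + host;
}

console.log(srvQueryName('cluster0.abcde.mongodb.net'));
// _mongodb._tcp.cluster0.abcde.mongodb.net
```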

&lt;p&gt;Tools such as &lt;a href="https://linux.die.net/man/1/nslookup"&gt;&lt;code&gt;nslookup&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://linux.die.net/man/1/dig"&gt;&lt;code&gt;dig&lt;/code&gt;&lt;/a&gt; can be used to view the contents of these DNS records, such as in the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;dig srv _mongodb._tcp.cluster0.abcde.mongodb.net

&lt;span class="p"&gt;;&lt;/span&gt; &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; DiG 9.18.18-0ubuntu0.22.04.1-Ubuntu &amp;lt;&amp;lt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; srv _mongodb._tcp.cluster0.abcde.mongodb.net
&lt;span class="p"&gt;;;&lt;/span&gt; global options: +cmd
&lt;span class="p"&gt;;;&lt;/span&gt; Got answer:
&lt;span class="p"&gt;;;&lt;/span&gt; -&amp;gt;&amp;gt;HEADER&lt;span class="o"&gt;&amp;lt;&amp;lt;-&lt;/span&gt; &lt;span class="no"&gt;opcode&lt;/span&gt;&lt;span class="sh"&gt;: QUERY, status: NOERROR, id: 24529
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;_mongodb._tcp.cluster0.abcde.mongodb.net. IN SRV

;; ANSWER SECTION:
_mongodb._tcp.cluster0.abcde.mongodb.net. 60 IN SRV 0 0 27017 cluster0-shard-00-01.abcde.mongodb.net.
_mongodb._tcp.cluster0.abcde.mongodb.net. 60 IN SRV 0 0 27017 cluster0-shard-00-02.abcde.mongodb.net.
_mongodb._tcp.cluster0.abcde.mongodb.net. 60 IN SRV 0 0 27017 cluster0-shard-00-00.abcde.mongodb.net.

&lt;/span&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="sh"&gt;dig txt cluster0.abcde.mongodb.net

; &amp;lt;&amp;lt;&amp;gt;&amp;gt; DiG 9.18.18-0ubuntu0.22.04.1-Ubuntu &amp;lt;&amp;lt;&amp;gt;&amp;gt; txt cluster0.abcde.mongodb.net
;; global options: +cmd
;; Got answer:
;; -&amp;gt;&amp;gt;HEADER&amp;lt;&amp;lt;- opcode: QUERY, status: NOERROR, id: 35223
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;cluster0.abcde.mongodb.net.    IN  TXT

;; ANSWER SECTION:
cluster0.abcde.mongodb.net. 60 IN   TXT "authSource=admin&amp;amp;replicaSet=atlas-abcde-shard-0"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What can go wrong?
&lt;/h2&gt;

&lt;p&gt;MongoDB's drivers require the information from &lt;em&gt;both&lt;/em&gt; DNS queries in order to successfully establish, authenticate and authorize a connection to a MongoDB Atlas cluster. If either of these queries fail, only part of the connection string details will be present, and if the driver doesn't error out right away, the subsequent connection attempt may be missing necessary information.&lt;/p&gt;

&lt;p&gt;For example, per the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/auth/auth.rst#implementation"&gt;Authentication specification&lt;/a&gt; regarding connection string options, when it comes to selecting the authentication source:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;if &lt;a href="https://www.mongodb.com/docs/manual/reference/connection-string/#mongodb-urioption-urioption.authSource"&gt;&lt;code&gt;authSource&lt;/code&gt;&lt;/a&gt; is specified, it is used.&lt;/li&gt;
&lt;li&gt;otherwise, if database is specified (in the connection string), it is used.&lt;/li&gt;
&lt;li&gt;otherwise, the &lt;code&gt;admin&lt;/code&gt; database is used.&lt;/li&gt;
&lt;/ol&gt;
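&lt;p&gt;That fallback order can be sketched as a tiny helper (hypothetical, not an actual driver API):&lt;/p&gt;

```javascript
// Mirror the auth source selection order from the Authentication spec:
// explicit authSource option, then the database from the URI, then "admin".
function resolveAuthSource(options, uriDatabase) {
  if (options.authSource) return options.authSource;
  if (uriDatabase) return uriDatabase;
  return 'admin';
}

console.log(resolveAuthSource({}, 'myFirstDatabase')); // myFirstDatabase
console.log(resolveAuthSource({}, null));              // admin
```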

&lt;p&gt;Given this order of operations, if the &lt;code&gt;SRV&lt;/code&gt; record resolves but the &lt;code&gt;TXT&lt;/code&gt; record &lt;em&gt;doesn't&lt;/em&gt;, then (assuming the driver doesn't error out first) the database provided in the connection string will be used for authentication. Using our original example of &lt;code&gt;mongodb+srv://&amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;@cluster0.abcde.mongodb.net/myFirstDatabase&lt;/code&gt;, the &lt;code&gt;myFirstDatabase&lt;/code&gt; database will be used to authenticate, which will result in an authentication failure such as &lt;code&gt;MongoServerError: Authentication failed&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Furthermore, though MongoDB's drivers support automatic retryability of &lt;a href="https://www.mongodb.com/docs/manual/core/retryable-reads/"&gt;reads&lt;/a&gt; and &lt;a href="https://www.mongodb.com/docs/manual/core/retryable-writes/"&gt;writes&lt;/a&gt;, these DNS query failures aren't retryable. There is currently a project proposed (&lt;a href="https://jira.mongodb.org/browse/DRIVERS-2757"&gt;DRIVERS-2757&lt;/a&gt;) to improve this in the future, but for now these errors bubble up to the application immediately.&lt;/p&gt;
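&lt;p&gt;Since these lookup failures surface immediately, an application can wrap its initial connection attempt in its own retry loop in the meantime. A minimal sketch (&lt;code&gt;connectFn&lt;/code&gt; is any promise-returning function, e.g. &lt;code&gt;client.connect()&lt;/code&gt;; the names here are illustrative, not a driver feature):&lt;/p&gt;

```javascript
// Retry a promise-returning connect function a few times before giving up,
// pausing between attempts. Purely illustrative; not built into the driver.
async function connectWithRetry(connectFn, attempts, delayMs) {
  let lastErr;
  while (attempts > 0) {
    try {
      return await connectFn();
    } catch (err) {
      lastErr = err;
      attempts -= 1;
      if (attempts > 0) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastErr;
}
```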

&lt;h2&gt;
  
  
  Can these issues be prevented?
&lt;/h2&gt;

&lt;p&gt;The best way to avoid these issues entirely is to just use the legacy &lt;a href="https://www.mongodb.com/docs/manual/reference/connection-string/#standard-connection-string-format"&gt;standard connection string format&lt;/a&gt;. If you're connecting to a replica set, the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst"&gt;server discovery and monitoring&lt;/a&gt; functionality of each driver will ensure topology changes are automatically discovered.&lt;/p&gt;

&lt;p&gt;Note that this will prevent new &lt;code&gt;mongos&lt;/code&gt; nodes from being discovered in a sharded cluster; however, if you don't anticipate these changing frequently, this will likely be a non-issue as well.&lt;/p&gt;
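&lt;p&gt;Using the &lt;code&gt;dig&lt;/code&gt; output from earlier, the equivalent legacy connection string simply lists the resolved hosts and the &lt;code&gt;TXT&lt;/code&gt; options explicitly. Assembled by hand below; the credentials and the trailing &lt;code&gt;ssl=true&lt;/code&gt; are placeholders/assumptions:&lt;/p&gt;

```javascript
// Build the legacy-format URI from the hosts and options the SRV and TXT
// lookups would otherwise provide (values from the dig output above;
// user/pass and ssl=true are placeholders).
const hosts = [
  'cluster0-shard-00-00.abcde.mongodb.net:27017',
  'cluster0-shard-00-01.abcde.mongodb.net:27017',
  'cluster0-shard-00-02.abcde.mongodb.net:27017'
].join(',');

const uri = 'mongodb://user:pass@' + hosts +
  '/myFirstDatabase?authSource=admin&amp;replicaSet=atlas-abcde-shard-0&amp;ssl=true';

console.log(uri);
```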

&lt;h2&gt;
  
  
  Are other drivers affected?
&lt;/h2&gt;

&lt;p&gt;Failure to resolve DNS records can affect all MongoDB drivers; however, it's highly unlikely you'll actually encounter this in a production setting. As there remains a non-zero chance this issue will manifest, here are some examples of failures you may see from other drivers:&lt;/p&gt;

&lt;h3&gt;
  
  
  Ruby driver
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Mongo::Error::NoSRVRecords: The DNS query returned no SRV records for 'cluster0.abcde.mongodb.net'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Java driver or Spring Boot MongoDB
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example 1
Caused by: com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting to connect. Client view of cluster state is {type=UNKNOWN, srvResolutionException=com.mongodb.MongoConfigurationException: Unable to look up SRV record for host cluster0-shard-00-01.abcde.mongodb.net, servers=[]}


# Example 2
Caused by: com.mongodb.MongoConfigurationException: Unable to look up SRV record for host cluster0.abcde.mongodb.net
        at com.mongodb.internal.dns.DnsResolver.resolveHostFromSrvRecords(DnsResolver.java:79)
        at com.mongodb.ConnectionString.&amp;lt;init&amp;gt;(ConnectionString.java:321)
        at com.mongodb.MongoClientURI.&amp;lt;init&amp;gt;(MongoClientURI.java:234)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Python driver
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example 1
pymongo.errors.ConfigurationError: The DNS query name does not exist: _mongodb._tcp.cluster0.abcde.mongodb.net.

# Example 2
Exception has occurred: ConfigurationError
The DNS operation timed out after 20.001205682754517 seconds
dns.exception.Timeout: The DNS operation timed out after 20.001205682754517 seconds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  C#/.NET driver
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Example 1
cluster0.abcde.mongodb.net IN TXT on x.x.x.x:53 timed out or is a transient error. A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

# Example 2
DnsClient.DnsResponseException:
at DnsClient.LookupClient.ResolveQuery (DnsClient, Version=1.6.0.0, Culture=neutral, PublicKeyToken=4574bb5573c51424)
at DnsClient.LookupClient.QueryInternal (DnsClient, Version=1.6.0.0, Culture=neutral, PublicKeyToken=4574bb5573c51424)
at DnsClient.LookupClient.Query (DnsClient, Version=1.6.0.0, Culture=neutral, PublicKeyToken=4574bb5573c51424)
at MongoDB.Driver.Core.Misc.DnsClientWrapper.ResolveTxtRecords (MongoDB.Driver.Core, Version=2.15.1.0, Culture=neutral, PublicKeyToken=null)
at MongoDB.Driver.Core.Configuration.ConnectionString.Resolve (MongoDB.Driver.Core, Version=2.15.1.0, Culture=neutral, PublicKeyToken=null)
at MongoDB.Driver.MongoUrl.Resolve (MongoDB.Driver, Version=2.15.1.0, Culture=neutral, PublicKeyToken=null)
at MongoDB.Driver.MongoClientSettings.FromUrl (MongoDB.Driver, Version=2.15.1.0, Culture=neutral, PublicKeyToken=null)

# Example 3
System.TimeoutException: A timeout occured after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0200000 } }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "Automatic", Type : "Unknown", State : "Disconnected", Servers : [], DnsMonitorException : "DnsClient.DnsResponseException: Query 54148 =&amp;gt; _mongodb._tcp.cluster0.abcde.mongodb.net IN SRV on x.x.x.x:53 timed out or is a transient error.
 ---&amp;gt; System.Net.Sockets.SocketException (110): Connection timed out
   at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
   at DnsClient.DnsUdpMessageHandler.Query(IPEndPoint server, DnsRequestMessage request, TimeSpan timeout)
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit)
   --- End of inner exception stack trace ---
   at DnsClient.LookupClient.ResolveQuery(IReadOnlyList`1 servers, DnsQuerySettings settings, DnsMessageHandler handler, DnsRequestMessage request, LookupClientAudit audit)
   at DnsClient.LookupClient.QueryInternal(DnsQuestion question, DnsQuerySettings queryOptions, IReadOnlyCollection`1 servers)
   at DnsClient.LookupClient.Query(DnsQuestion question)
   at DnsClient.LookupClient.Query(String query, QueryType queryType, QueryClass queryClass)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  PHP driver
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fatal error: Uncaught MongoDB\Driver\Exception\InvalidArgumentException: Failed to parse URI options: Failed to look up SRV record "_mongodb._tcp.cluster0.abcde.mongodb.net": The requested name is valid but does not have an IP address.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Go driver
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;error parsing command line options: error parsing uri: lookup cluster0.abcde.mongodb.net on x.x.x.x:53: cannot unmarshal DNS message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that (per the &lt;a href="https://pkg.go.dev/go.mongodb.org/mongo-driver/mongo#hdr-Potential_DNS_Issues"&gt;MongoDB Go driver documentation&lt;/a&gt;):&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Building with Go 1.11+ and using connection strings with the &lt;code&gt;mongodb+srv&lt;/code&gt; scheme is unfortunately incompatible with some DNS servers in the wild due to the change introduced in &lt;a href="https://github.com/golang/go/issues/10622"&gt;https://github.com/golang/go/issues/10622&lt;/a&gt;. You may receive an error with the message "cannot unmarshal DNS message" while running an operation when using DNS servers that non-compliantly compress SRV records. Old versions of &lt;code&gt;kube-dns&lt;/code&gt; and the native DNS resolver (&lt;code&gt;systemd-resolver&lt;/code&gt;) on Ubuntu 18.04 are known to be non-compliant in this manner. We suggest using a different DNS server (8.8.8.8 is the common default), and, if that's not possible, avoiding the &lt;code&gt;mongodb+srv&lt;/code&gt; scheme.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  DNS is hard ...
&lt;/h2&gt;

&lt;p&gt;It sure can be, as intermittent/transient network events can also impact MongoDB drivers' ability to resolve DNS queries. The drivers (typically) rely on low-level OS APIs (such as &lt;a href="https://linux.die.net/man/3/getaddrinfo"&gt;&lt;code&gt;getaddrinfo&lt;/code&gt;&lt;/a&gt;) for network address and service translation. As such you may occasionally get errors such as &lt;code&gt;MongooseServerSelectionError: getaddrinfo EAI_AGAIN cluster0-shard-00-01.abcde.mongodb.net&lt;/code&gt; even when using the legacy (&lt;code&gt;mongodb://&lt;/code&gt;) URI scheme.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Footnotes&lt;/strong&gt;&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;&lt;small&gt;Sharded clusters can detect additions/removals of &lt;code&gt;mongos&lt;/code&gt; nodes if the driver(s) have implemented the &lt;a href="https://github.com/mongodb/specifications/blob/master/source/polling-srv-records-for-mongos-discovery/polling-srv-records-for-mongos-discovery.rst"&gt;polling &lt;code&gt;SRV&lt;/code&gt; records for &lt;code&gt;mongos&lt;/code&gt; discovery&lt;/a&gt; specification.&lt;/small&gt; ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>mongodb</category>
      <category>dns</category>
      <category>webdev</category>
      <category>node</category>
    </item>
  </channel>
</rss>
