<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yaokai Jiang</title>
    <description>The latest articles on DEV Community by Yaokai Jiang (@yaokaijiang).</description>
    <link>https://dev.to/yaokaijiang</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2091413%2F92b37e4a-88a2-4b7a-87d0-6efc6e492740.jpeg</url>
      <title>DEV Community: Yaokai Jiang</title>
      <link>https://dev.to/yaokaijiang</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yaokaijiang"/>
    <language>en</language>
    <item>
      <title>Beyond the Visual Editor: A Developer's Guide to Unlocking Full Control with Momen</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Wed, 29 Oct 2025 08:59:28 +0000</pubDate>
      <link>https://dev.to/momen_hq/beyond-the-visual-editor-a-developers-guide-to-unlocking-full-control-with-momen-4ef5</link>
      <guid>https://dev.to/momen_hq/beyond-the-visual-editor-a-developers-guide-to-unlocking-full-control-with-momen-4ef5</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jju6jkrctiihlbcfjc0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5jju6jkrctiihlbcfjc0.jpeg" width="800" height="450"&gt;&lt;/a&gt;Momen's Product Architecture&lt;/p&gt;
&lt;p&gt;&lt;a href="http://momen.app" rel="noopener noreferrer"&gt;Momen&lt;/a&gt;'s development team has always held the view that, regardless of code or no-code, developers always need control over their product. This developer-first mindset is embedded in &lt;a href="https://momen.app/blogs/cost-efficient-no-code-full-stack-platform/" rel="noopener noreferrer"&gt;Momen's architecture&lt;/a&gt;, and has greatly influenced the product choices we made, and will continue to shape how we think about &lt;a href="https://momen.app/product/category/feature" rel="noopener noreferrer"&gt;our product.&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;The No-Code Ceiling and the Pro-Code Escape Hatch&lt;/h2&gt;
&lt;p&gt;As professional developers, we know that the appeal of no-code platforms is often tempered by a significant concern: the "no-code ceiling". These platforms promise incredible initial velocity, allowing for the rapid creation of prototypes and MVPs. However, this speed can come at the cost of control, leading to a point where custom logic, bespoke user interfaces, or deep, non-standard integrations become prohibitively difficult or impossible. Hitting that ceiling is closely tied to the dreaded "vendor lock-in": it forces teams into complex workarounds or, in the worst cases, a complete and costly rebuild on a traditional code stack.&lt;/p&gt;
&lt;p&gt;At Momen, we believe we ought to create a platform that offers the ability to "Collaborate with Code" and "Complete Visibility", ensuring developers can build complex, scalable applications without ever hitting a ceiling.&lt;/p&gt;
&lt;h2&gt;The Universal Data Layer: Seamless Integration with a Self-Documenting GraphQL API&lt;/h2&gt;
&lt;p&gt;The single most important architectural decision that underpins Momen's power and flexibility for developers is its GraphQL-first foundation. This is not an optional feature or an afterthought. It is the core of the platform's data layer, as all backend requests use GraphQL (with the exception of webhooks). This choice provides a universal, intelligent, and highly efficient interface for all data interactions, enabling the advanced capabilities of the other control features.&lt;/p&gt;
&lt;p&gt;The structured nature of GraphQL is what makes the Log Service so detailed and queryable. The query function available to custom React Components is a GraphQL client, allowing them to efficiently fetch precisely the data they need. The ability to embed complex GraphQL queries within server-side JavaScript gives developers unparalleled power for data manipulation. In essence, in the Momen world, GraphQL is the foundation on which both visibility and the ability to collaborate with code are built.&lt;/p&gt;
&lt;h3&gt;Documentation That Never Lies&lt;/h3&gt;
&lt;p&gt;One of the most persistent frustrations for developers is out-of-date API documentation. GraphQL solves this problem by being &lt;strong&gt;inherently self-documenting&lt;/strong&gt;. The GraphQL schema—a strongly typed definition of all available data, queries, and mutations—serves as a live, machine-readable contract that is always accurate.&lt;/p&gt;
&lt;p&gt;Developers can use interactive tools like GraphiQL to explore the &lt;a href="https://docs.momen.app/data/api/" rel="noopener noreferrer"&gt;entire API&lt;/a&gt;, discover available fields, and build queries with auto-completion and real-time validation. This fundamentally changes the developer onboarding and collaboration workflow. A new team member can be pointed to the API endpoint and immediately understand its full capabilities without reading static documents. When a data model is changed in Momen's visual editor, the schema updates instantly, making the change discoverable by the entire team. This creates a low-friction, agile development process that directly supports Momen's promise of "Rapid Iteration" and effective "Team Collaboration".&lt;/p&gt;
&lt;h3&gt;Momen as a Headless Backend (BaaS)&lt;/h3&gt;
&lt;p&gt;The ultimate expression of control and flexibility is the ability to use Momen's powerful backend independently of its UI builder. Because Momen exposes a public, standards-compliant GraphQL API, it can function as a complete Backend-as-a-Service (BaaS) for any application. This means developers are never locked into Momen's frontend tools and have the freedom to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Build a native iOS or Android app that uses Momen for its entire backend, including data storage, user authentication, and business logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Develop a web application using a different frontend framework like Vue.js or Svelte, while leveraging Momen for its scalable backend infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate Momen as a microservice within a larger corporate architecture, where other systems can call its GraphQL API.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This capability directly refutes the notion of vendor lock-in and positions Momen as a powerful, flexible, and scalable backend that can be consumed by any client, anywhere.&lt;/p&gt;
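&lt;p&gt;To make the headless usage concrete, any HTTP client can consume the backend by POSTing a standard GraphQL payload. The sketch below builds such a request in TypeScript; the endpoint URL, query, and field names are hypothetical illustrations, not Momen's actual schema (which is discoverable via introspection tools like GraphiQL).&lt;/p&gt;

```typescript
// Build a standard GraphQL-over-HTTP request that any client
// (native app, Vue/Svelte frontend, microservice) could send.
function buildGraphqlRequest(endpoint: string, query: string, variables: any) {
  return {
    url: endpoint,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // A GraphQL payload is just JSON with "query" and "variables".
      body: JSON.stringify({ query, variables }),
    },
  };
}

const req = buildGraphqlRequest(
  "https://your-project.momen.app/api/graphql", // hypothetical endpoint
  "query ($limit: Int) { article(limit: $limit) { id title } }", // hypothetical query
  { limit: 10 }
);

// Any HTTP client can then dispatch it, e.g.:
// fetch(req.url, req.options).then(r =&gt; r.json()).then(console.log);
console.log(req.options.method);
```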
&lt;h2&gt;Server-Side Superpowers: Injecting Custom JavaScript Logic into Actionflow&lt;/h2&gt;
&lt;p&gt;Momen's Actionflow is a powerful visual tool for building server-side workflows and automations. While its collection of built-in nodes can handle a wide array of tasks, there are scenarios where visual logic reaches its limit. For proprietary business algorithms, complex data transformations, or integrations with systems that lack a modern API, developers need the ability to write code. Momen provides this capability through the &lt;strong&gt;"Custom Code"&lt;/strong&gt; node, which allows for the injection of arbitrary JavaScript directly into a server-side Actionflow.&lt;/p&gt;
&lt;p&gt;This feature almost turns Momen into an integrated, lightweight alternative to standalone Function-as-a-Service (FaaS) platforms like AWS Lambda or Google Cloud Functions for a wide range of use cases. Developers can write, deploy, and trigger server-side code entirely within the Momen ecosystem, avoiding the operational overhead and context-switching required to manage separate cloud infrastructure.&lt;/p&gt;
&lt;p&gt;Note that custom code currently runs in a bare V8 environment; Node.js APIs are not yet supported.&lt;/p&gt;
&lt;h3&gt;The context Bridge: Interfacing with the Momen Environment&lt;/h3&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7tpma4y7mky1x434g46k.png" width="800" height="441"&gt;Custom JavaScript code within an Actionflow in Momen's editor&lt;p&gt;Custom JavaScript code within an Actionflow does not execute in isolation. It is a first-class citizen of the workflow, interacting with other nodes and the broader Momen environment through a global context object. This object serves as a bridge, enabling seamless data flow:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Getting Inputs:&lt;/strong&gt; To access data passed in by previous nodes or the Actionflow's initial trigger, developers call &lt;code&gt;context.getArg("input_parameter_name")&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Returning Outputs:&lt;/strong&gt; To pass the results of the custom logic to subsequent nodes, developers call &lt;code&gt;context.setReturn("output_name", result_variable)&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manipulating the Backend:&lt;/strong&gt; To read or write the project's database, run other Actionflows, or invoke third-party APIs, developers call &lt;code&gt;context.runGql("operationName", query, variables, { role })&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For example, a simple yet practical use case from the documentation involves masking a phone number. The Actionflow receives a full phone number as an input, a "Custom Code" node uses &lt;code&gt;context.getArg('phone_number')&lt;/code&gt; to retrieve it, performs a string manipulation with JavaScript, and then uses &lt;code&gt;context.setReturn('result_phone', result)&lt;/code&gt; to make the masked number available to the rest of the flow.&lt;/p&gt;
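&lt;p&gt;The body of such a node might look like the following sketch. The mock &lt;code&gt;context&lt;/code&gt; object here stands in for Momen's runtime (which provides &lt;code&gt;getArg&lt;/code&gt; and &lt;code&gt;setReturn&lt;/code&gt; as described above); the sample phone number and the masking rule are illustrative assumptions.&lt;/p&gt;

```typescript
// Mock of the runtime-provided context object, for illustration only.
const context = {
  args: { phone_number: "13812345678" } as any, // sample input from a previous node
  returns: {} as any,
  getArg(name: string) { return this.args[name]; },
  setReturn(name: string, value: any) { this.returns[name] = value; },
};

// --- body of the "Custom Code" node ---
// Retrieve the input passed in by the previous node or trigger.
const phone: string = context.getArg("phone_number");

// Keep the first three and last four digits; mask everything between.
const masked = phone.replace(/^(\d{3})\d+(\d{4})$/, "$1****$2");

// Expose the result to subsequent nodes in the Actionflow.
context.setReturn("result_phone", masked);

console.log(context.returns.result_phone); // prints "138****5678"
```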
&lt;h3&gt;Actionflow as a Programmable, Serverless Backend&lt;/h3&gt;
&lt;p&gt;&lt;a href="https://docs.momen.app/actions/actionflow/custom_code/" rel="noopener noreferrer"&gt;The "Custom Code" node&lt;/a&gt;, combined with Actionflow's other features, creates a robust, programmable, serverless environment. Developers can orchestrate complex sequences that mix visual configuration with custom code, including database operations, external API calls, AI agent execution, and permission management. This is supported by several powerful platform capabilities:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Execution Modes:&lt;/strong&gt; Developers can choose &lt;strong&gt;Synchronous&lt;/strong&gt; execution for tasks that need strong consistency or &lt;strong&gt;Asynchronous&lt;/strong&gt; for long-running background jobs that should not block the main thread.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transactional Integrity:&lt;/strong&gt; Critically, for synchronous flows, Momen guarantees that if any node fails—including a custom code block—all data changes to the database are rolled back. This provides the atomic, all-or-nothing guarantee required for building reliable transactional logic, such as financial transfers or multi-step order processing, a feature missing from almost all current no-code solutions on the market.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Versatile Triggers:&lt;/strong&gt; Workflows can be initiated by frontend events, scheduled via Cron expressions, triggered by database changes, or invoked by external systems through webhooks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embedded GraphQL:&lt;/strong&gt; For ultimate data manipulation power, a code node can contain raw GraphQL queries, enabling arbitrarily complex data manipulation, such as batch operations, directly on the server.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
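&lt;p&gt;A code node combining these capabilities with &lt;code&gt;context.runGql&lt;/code&gt; for a batch operation might be sketched as follows. The operation name, mutation, table, and role are assumptions, and the mock &lt;code&gt;context&lt;/code&gt; stands in for Momen's runtime.&lt;/p&gt;

```typescript
// Mock of the runtime-provided context, for illustration only: a real
// node would execute the query against the project's database and
// return the GraphQL response.
const context = {
  runGql(operationName: string, query: string, variables: any, options?: { role?: string }) {
    return { operationName, affected: variables.ids.length };
  },
};

// Batch-archive a set of orders in one server-side GraphQL mutation
// (hypothetical schema), executed under an assumed "admin" role.
const result = context.runGql(
  "batchArchive",
  "mutation ($ids: [Int!]!) { update_order(where: {id: {_in: $ids}}, _set: {archived: true}) { affected_rows } }",
  { ids: [1, 2, 3] },
  { role: "admin" }
);

console.log(result.affected); // prints 3
```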
&lt;p&gt;Higher plan tiers have higher limits on the number of automated executions. In addition, for projects on single-tenant servers, Momen entirely removes the timeout limit on Actionflow execution and the cap on automated executions, making it suitable for heavy-duty data processing.&lt;/p&gt;
&lt;h2&gt;Infinite Frontend Flexibility: Building Bespoke UIs with Custom React Components&lt;/h2&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsl6kch6gglo3c4x767k7.png" width="800" height="441"&gt;&lt;p&gt;While Momen provides a rich library of pre-built UI elements, it recognizes that true product differentiation often lies in a unique and highly polished user experience. For developers who need to move beyond standard widgets, Momen offers &lt;a href="https://docs.momen.app/account_community/code_component/" rel="noopener noreferrer"&gt;&lt;strong&gt;Code Components&lt;/strong&gt;&lt;/a&gt;, providing an escape hatch to infinite frontend flexibility by allowing the creation and integration of custom React components written in TypeScript. This capability is the ultimate answer to UI limitations, empowering developers to build anything from data-driven charts and advanced animations to interactive maps and proprietary design systems.&lt;/p&gt;
&lt;h3&gt;A Professional and Familiar Workflow&lt;/h3&gt;
&lt;p&gt;Momen deeply respects established developer practices by integrating the Code Component workflow seamlessly with standard tools and command-line habits. Instead of trapping developers in a walled garden, it provides a professional and familiar development loop:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Setup:&lt;/strong&gt; Developers begin by installing the Momen CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Initialization:&lt;/strong&gt; The command &lt;code&gt;momen create project_name&lt;/code&gt; scaffolds a complete React and TypeScript project, including a template component, ready for development in any IDE such as VS Code or Cursor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Local Development:&lt;/strong&gt; Running &lt;code&gt;npm run dev&lt;/code&gt; starts a local development server for isolated component building and testing, exactly as one would in any standard React project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Live Preview:&lt;/strong&gt; By executing &lt;code&gt;npm run preview&lt;/code&gt;, developers can see their locally running component rendered &lt;em&gt;live&lt;/em&gt; inside the Momen editor. This instantly closes the feedback loop, solving the critical "context gap" by showing how the component behaves with real Momen data sources and actions, dramatically accelerating development and debugging.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Publishing:&lt;/strong&gt; Once complete, the developer updates the version number in &lt;code&gt;package.json&lt;/code&gt; (adhering to semantic versioning) and runs &lt;code&gt;momen publish&lt;/code&gt;. This makes the new component and version available for use across Momen projects.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;The Component Contract: Props, State, and Events&lt;/h3&gt;
&lt;p&gt;The integration between custom code and the no-code editor is managed through a well-defined and strongly-typed "contract". Every Code Component must export a specific set of TypeScript interfaces, which allows Momen's visual editor to automatically parse its properties and expose them in the configuration panel. This creates a robust and predictable system for two-way communication.&lt;/p&gt;
&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;colgroup&gt;
&lt;col&gt;
&lt;col&gt;
&lt;/colgroup&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;Interface / API&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;Description&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;[ComponentName]PropData&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;Defines the data properties passed &lt;em&gt;from&lt;/em&gt; the Momen editor &lt;em&gt;to&lt;/em&gt; the component (e.g., title: string).&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;[ComponentName]StateData&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;Defines the component's internal state to be exposed &lt;em&gt;back to&lt;/em&gt; the Momen platform for use in other actions.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;[ComponentName]Event&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;Defines the events the component can emit (e.g., onConfirm), which can be connected to Momen Actionflows.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;query&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;A context API function to make backend GraphQL requests to fetch or mutate data.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;navigate&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;A context API function to trigger page navigation within the parent Momen application.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;globalData / pageData&lt;/p&gt;&lt;/td&gt;
&lt;td colspan="1" rowspan="1"&gt;&lt;p&gt;Context API objects to access global and page-level variables defined in the Momen editor.&lt;/p&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;p&gt;This structured approach enables powerful use cases, such as building a custom charting component that fetches its data from the Momen backend via the &lt;code&gt;query&lt;/code&gt; API and emits an &lt;code&gt;onDataPointClick&lt;/code&gt; event that triggers a Momen Actionflow. It also allows for the integration of any third-party npm package, from data visualization libraries to specialized form controls, giving developers access to the entire React ecosystem.&lt;/p&gt;
&lt;h2&gt;Unprecedented Observability: Mastering Application Behavior with Fully Searchable Logs&lt;/h2&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc74256boawpaebj92dh.png" width="800" height="441"&gt;&lt;p&gt;In many managed platforms, the application's inner workings remain an opaque black box, making debugging a frustrating exercise in guesswork. Momen fundamentally rejects this paradigm by committing to "Complete Visibility" through its comprehensive Log Service. This service transforms the application from a black box into a fully transparent glass box, providing developers with the deep observability needed to monitor, debug, and optimize application behavior with precision.&lt;/p&gt;
&lt;h3&gt;A Centralized Nervous System for Your App&lt;/h3&gt;
&lt;p&gt;The power of the &lt;a href="https://docs.momen.app/deployment/log_service/" rel="noopener noreferrer"&gt;Log Service&lt;/a&gt; stems from its comprehensive scope, designed to "capture all key information within the system". This is a direct architectural benefit of Momen's unified full-stack nature, where frontend interactions, backend logic, and database operations are managed in a single, cohesive environment. The service provides detailed, categorized logs for every critical layer of the application stack, including:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Actionflow:&lt;/strong&gt; Traces the execution of each node in server-side workflows.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Database Operations:&lt;/strong&gt; Records every create, read, update, and delete (CRUD) operation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AI:&lt;/strong&gt; Captures the full context of AI agent conversations and model interactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API:&lt;/strong&gt; Logs all requests to both internal and external APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trigger:&lt;/strong&gt; Records the activation of scheduled, database, or webhook-based triggers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gateway:&lt;/strong&gt; Monitors all HTTP requests to the backend.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment and Release:&lt;/strong&gt; Tracks every backend release and deployment event.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GraphQL:&lt;/strong&gt; Provides detailed logs for all backend GraphQL requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This holistic logging is not an afterthought; it is an emergent property of the platform's foundational architecture. Because all backend requests use GraphQL, the data flowing through the system is inherently structured and predictable. This makes it possible to generate richly detailed, queryable logs for every transaction, turning the promise of "Complete Visibility" into a tangible reality.&lt;/p&gt;
&lt;h3&gt;Surgical Precision in Debugging&lt;/h3&gt;
&lt;p&gt;Momen's Log Service is far more than a simple stream of text; it is a powerful, searchable database designed for developers. It features a sophisticated query syntax that allows for surgical debugging and analysis. Developers can move beyond simple keyword searches and employ a range of operators to rapidly isolate issues:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Field Queries:&lt;/strong&gt; Target specific fields, such as &lt;code&gt;requestMethod: POST&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wildcard Searches:&lt;/strong&gt; Use wildcards for fuzzy matching, like &lt;code&gt;traceId: 20241024*&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Numeric Range Queries:&lt;/strong&gt; Filter by value ranges, for instance, &lt;code&gt;responseSize: [5000 to 10000]&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Boolean Logic:&lt;/strong&gt; Combine conditions with &lt;code&gt;and&lt;/code&gt; / &lt;code&gt;or&lt;/code&gt; for complex queries, such as &lt;code&gt;account and status: 200&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Nested Field Queries:&lt;/strong&gt; Drill down into JSON objects, a common necessity for modern APIs, with queries like &lt;code&gt;requestBody.query: mutation*&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A standout feature is the universal &lt;code&gt;traceId&lt;/code&gt;, a hallmark of enterprise-grade systems. This identifier allows a developer to follow a single user request end-to-end across the entire system, from the initial frontend-initiated request captured in a Gateway log, through the business logic in an Actionflow log, to the final write in a Database Operations log. This capability is indispensable for debugging complex, multi-step processes like payment flows or AI-driven workflows, and it is a clear signal that Momen is engineered for serious, scalable applications.&lt;/p&gt;
&lt;h2&gt;The Road Ahead: Momen's Vision for Even More Developer Control&lt;/h2&gt;
&lt;p&gt;While Momen already provides a powerful suite of tools for developers, we are still far from what we envision. Here is a non-exhaustive list of planned features, each aimed at better addressing a key area of the development lifecycle and further bridging the gap between no-code speed and the level of control developers expect.&lt;/p&gt;
&lt;h3&gt;Interactive Debugging with Breakpoints and State Inspection&lt;/h3&gt;
&lt;p&gt;Momen plans to move beyond log-based analysis with a true &lt;strong&gt;In-App Debugger.&lt;/strong&gt; Currently, while the frontend has a basic debugger built into Mirror, backend debugging relies on analyzing detailed logs and inspecting network requests, a process that is powerful but reactive. The upcoming debugger is set to introduce the ability to pause and inspect your app's state at any moment, allowing developers to set breakpoints, examine variables, and step through logic in real time. This feature, a standard in professional IDEs, will provide the granular control needed to fine-tune complex application behavior and dramatically accelerate troubleshooting.&lt;/p&gt;
&lt;h3&gt;Real-Time System Observability&lt;/h3&gt;
&lt;p&gt;Building on the principle of "Complete Visibility," Momen aims to provide deeper insights into application performance with a real-time activity monitor. Inspired by tools like macOS's Activity Monitor, this feature would offer a live dashboard of system health, showing active tasks, CPU and memory usage, and other performance metrics. This represents a significant evolution from the current Log Service, which excels at post-mortem analysis, to a proactive monitoring tool that gives developers a live pulse on their application's operational status and resource consumption.&lt;/p&gt;
&lt;h3&gt;Advanced Version Control and Release Management&lt;/h3&gt;
&lt;p&gt;To support more complex team-based development, Momen is looking to implement a more sophisticated version control system. Currently, the platform lacks the branching and merging capabilities found in mature competitors. The introduction of a Git-style workflow would be a transformative addition, enabling parallel development, feature experimentation in isolated branches, and structured release pipelines. This would elevate Momen's collaboration features to align with the best practices used by professional software teams worldwide, moving beyond simple versioning for individual assets like Actionflows.&lt;/p&gt;
&lt;h3&gt;A Versioned Database&lt;/h3&gt;
&lt;p&gt;As applications grow, the challenge often shifts from managing schema changes to understanding the history of the data itself. For auditing, debugging complex state transitions, or performing data forensics, simply knowing &lt;em&gt;when&lt;/em&gt; a record was last updated is insufficient. The critical need is to answer: "What did this specific row look like at a previous point in time?"&lt;/p&gt;
&lt;p&gt;To address this, the vision is to implement a true row-level versioning system, enabling Point-in-Time Recovery (PITR) for every individual record. An &lt;code&gt;UPDATE&lt;/code&gt; or &lt;code&gt;DELETE&lt;/code&gt; operation would not discard the old data; instead, it would archive the previous version of the row into a history table, marking the exact period during which it was valid. This creates a complete, immutable audit trail for every piece of data in the application.&lt;/p&gt;
&lt;p&gt;Such a system would allow developers to "time travel" through their data, querying the state of any row &lt;code&gt;AS OF&lt;/code&gt; a specific timestamp. This granular recovery and analysis capability is invaluable for safely rolling back incorrect data modifications, understanding data evolution without restoring an entire database backup, and building applications with institutional-grade auditability. Momen's foundation on a native, high-performance PostgreSQL database provides the ideal architecture to build this powerful feature, abstracting the complexities of manual versioning into a seamless platform capability.&lt;/p&gt;
&lt;h3&gt;Dedicated Runtimes with Node.js&lt;/h3&gt;
&lt;p&gt;For developers requiring maximum control, the future involves offering a fully fledged, Node.js-compatible Custom Code runtime environment for projects running on single-tenant servers. This would empower developers to move beyond isolated JavaScript snippets in Actionflow and deploy entire server-side applications using the familiar Node.js ecosystem. The runtime would extend the ACID guarantees currently offered in synchronous Actionflows, delivering the ultimate level of backend control: the full power and flexibility of a standard Node.js environment combined with the reliability and managed nature of Momen's transactional, PostgreSQL-backed data layer.&lt;/p&gt;
&lt;h2&gt;Conclusion: True Control is the Freedom to Choose&lt;/h2&gt;
&lt;p&gt;The four pillars of developer control—GraphQL, JavaScript logic, React Components, and Logs—demonstrate that Momen is engineered to be more than just a no-code platform. It is a productivity platform that respects developer expertise and provides the tools necessary to build sophisticated, scalable, and truly custom applications.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A Universal Data Layer&lt;/strong&gt; via the self-documenting GraphQL API offers efficient, flexible integration and the option to use Momen as a headless backend.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Server-Side Superpowers&lt;/strong&gt; via JavaScript in Actionflow allow for the implementation of any proprietary business logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infinite Frontend Flexibility&lt;/strong&gt; via &lt;a href="https://momen.app/blogs/interop-between-code-and-no-code-using-code-components-to-add-echart-in-your-project/" rel="noopener noreferrer"&gt;Code Components&lt;/a&gt; ensures that the UI is never a limitation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unprecedented Observability&lt;/strong&gt; via the Log Service provides the transparency needed for robust debugging and monitoring.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Momen is not "anti-code". It is "pro-productivity". The platform automates the vast majority of development work that is repetitive and undifferentiated—such as CRUD operations, deployment pipelines, and &lt;a href="https://momen.app/blogs/cost-efficient-no-code-full-stack-platform/" rel="noopener noreferrer"&gt;server scaling&lt;/a&gt;—freeing developers to focus their unique skills on the work that creates competitive advantage. It offers the velocity of visual development without the constraints, creating &lt;a href="https://momen.app/blogs/ensuring-consistency-in-no-code-apps/" rel="noopener noreferrer"&gt;a powerful environment&lt;/a&gt; where you can build faster without ever hitting a wall.&lt;/p&gt;

</description>
      <category>powerfuldevs</category>
      <category>devops</category>
      <category>architecture</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Is AI Coding Making No-Code Obsolete? The Plot Twist Nobody Saw Coming 🤖</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Wed, 25 Dec 2024 09:30:28 +0000</pubDate>
      <link>https://dev.to/momen_hq/is-ai-coding-making-no-code-obsolete-the-plot-twist-nobody-saw-coming-4fpk</link>
      <guid>https://dev.to/momen_hq/is-ai-coding-making-no-code-obsolete-the-plot-twist-nobody-saw-coming-4fpk</guid>
      <description>&lt;p&gt;Just wrapped up a livestream discussion about the future of development, and I had to share these insights with the Dev community.&lt;/p&gt;

&lt;p&gt;Livestream recap here: &lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.youtube.com/live/GpwFNixtkYc?si=_wi1jOvUznVCJRJh" rel="noopener noreferrer"&gt;
      youtube.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  The Key Revelation 🔍
&lt;/h2&gt;

&lt;p&gt;Everyone's been asking if AI coding tools like Bolt.new, Windsurf, and Cursor will make no-code obsolete. After diving deep into this topic, here's what became crystal clear: we're not witnessing a battle, we're watching a merger.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Development Evolution:
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Code → No-Code → AI Coding → No-Code + AI&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Here's What We Uncovered:
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1️⃣ The False Competition
&lt;/h3&gt;

&lt;p&gt;The biggest misconception? That AI coding and no-code are competing forces. Our discussion revealed they're actually complementary technologies heading toward convergence.&lt;/p&gt;

&lt;h3&gt;
  
  
  2️⃣ The Present Reality
&lt;/h3&gt;

&lt;p&gt;From our live discussion, we found that developers using both AI coding tools AND no-code platforms are seeing exponential productivity gains. It's not about choosing sides - it's about leveraging both.&lt;/p&gt;

&lt;h3&gt;
  
  
  3️⃣ The Emerging Pattern
&lt;/h3&gt;

&lt;p&gt;Here's what's fascinating: the most successful teams aren't picking between AI coding and no-code. They're strategically combining both to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accelerate development cycles&lt;/li&gt;
&lt;li&gt;Reduce technical debt&lt;/li&gt;
&lt;li&gt;Enable more creative solutions&lt;/li&gt;
&lt;li&gt;Empower both developers and non-developers&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4️⃣ The Real Future
&lt;/h3&gt;

&lt;p&gt;It's becoming clear that the future isn't about AI replacing no-code or vice versa. It's about their synthesis creating something entirely new.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Big Takeaways 💡
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;AI coding tools are enhancing, not replacing, no-code platforms&lt;/li&gt;
&lt;li&gt;The "versus" narrative is missing the bigger picture&lt;/li&gt;
&lt;li&gt;Teams leveraging both technologies are seeing the best results&lt;/li&gt;
&lt;li&gt;We're at the beginning of a new development paradigm&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What This Means For You
&lt;/h2&gt;

&lt;p&gt;Whether you're a traditional developer, no-code enthusiast, or somewhere in between, this evolution opens up new possibilities. The question isn't "which side should I choose?" but rather "how can I leverage both?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Your Turn 🎤
&lt;/h2&gt;

&lt;p&gt;What's your experience with this convergence? As a developer, how do you use these AI coding tools? Share your insights below!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>learning</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Momen vs. Bubble Comparison: An Honest Look at Features and Performance</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Tue, 26 Nov 2024 06:05:01 +0000</pubDate>
      <link>https://dev.to/momen_hq/momen-vs-bubble-comparison-an-honest-look-at-features-and-performance-37ec</link>
      <guid>https://dev.to/momen_hq/momen-vs-bubble-comparison-an-honest-look-at-features-and-performance-37ec</guid>
      <description>&lt;p&gt;In the landscape of no-code development, Bubble has long been the reigning monarch for given its long history and powerful features. Recently, Momen was launched on &lt;a href="https://www.producthunt.com/products/momen" rel="noopener noreferrer"&gt;Product Hunt&lt;/a&gt;, and was voted as #1 product of the day on Nov 17th. They both aim at solving the same issue: enabling rapid application development without traditional coding. While both platforms target similar audiences - ambitious but potentially non-technical founders - their philosophical and technical approaches differ significantly.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ZOjAL07Jf6s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Philosophical differences
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Bubble: Reinventing Development for Accessibility
&lt;/h3&gt;

&lt;p&gt;Bubble has taken a bold approach by reimagining many traditional development concepts. Its philosophy centers on making web development more accessible, even at the cost of potentially alienating users with partial industry experience (such as designers) and rendering Bubble skills less transferable by departing from industry standards. This is evident in its custom terminology - for instance, referring to database rows as "things" - and its NoSQL-like approach to data management, despite using PostgreSQL under the hood. Bubble prioritizes ease of comprehension over standards compliance and a higher ceiling of capability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Momen: Bridging the Code/No-Code Divide
&lt;/h3&gt;

&lt;p&gt;Momen rejects what it sees as a false dichotomy &lt;a href="https://momen.app/blogs/interop-between-code-and-no-code-using-code-components-to-add-echart-in-your-project/" rel="noopener noreferrer"&gt;between code and no-code development&lt;/a&gt;. Instead, it embraces industry-standard practices and terminology, believing that fundamental software development principles - such as fail-fast approaches, quick feedback loops, and DRY (Don't Repeat Yourself) - remain valuable regardless of the development method. This philosophy manifests in its more direct exposure of PostgreSQL capabilities and its use of familiar industry terms and patterns.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy22h0saj6venjt96pcdh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy22h0saj6venjt96pcdh.png" alt=" " width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  When to Choose Bubble
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Team-Based Development
&lt;/h4&gt;

&lt;p&gt;Teams benefit from Bubble's more robust collaboration features, especially its Git-like branching and merging capabilities. A clear staging/production separation is also essential for rapid iteration while keeping production stable. These features work in concert to enable teams of developers to follow a development process similar to what has been proven to work in the traditional software engineering world.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Mobile-First Projects
&lt;/h4&gt;

&lt;p&gt;Bubble offers iOS and Android support as of this writing. While support for both platforms is still in beta and many native functionalities are not yet integrated, their React Native-based implementation should be quite solid when it emerges from beta. Given the similarities between web and React Native, Bubble's mobile solution should be quite accessible to its existing users, and no more difficult than its web offering for new users.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6a244pwzcogjvfpungl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6a244pwzcogjvfpungl.png" alt=" " width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  When to Choose Momen
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Data-Intensive Applications
&lt;/h4&gt;

&lt;p&gt;Momen has an architectural emphasis on &lt;a href="https://momen.app/blogs/why-we-choose-postgresql-for-momen/" rel="noopener noreferrer"&gt;database performance&lt;/a&gt;. A project built on Momen can process up to 5,000 rows per second, compared to Bubble's 100. Its native PostgreSQL integration provides ACID compliance in multi-step Actionflows (Momen's name for backend workflows) and access to lower-level primitives such as constraints (foreign keys, unique, non-null, etc.), making it ideal for applications requiring strong &lt;a href="https://momen.app/blogs/ensuring-consistency-in-no-code-apps/" rel="noopener noreferrer"&gt;data consistency&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. API-First Projects
&lt;/h4&gt;

&lt;p&gt;Projects that need to be integrated with other systems benefit from Momen's out-of-the-box GraphQL APIs and comprehensive permission system. GraphQL's introspection API enables developers to easily comprehend the functionality of the API, while the permission system, supporting both RBAC and ABAC, gives fine-grained control over every piece of data and functionality, making it well-suited for API-centric architectures.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Projects needing more Developer Control
&lt;/h4&gt;

&lt;p&gt;Momen caters to projects that need to run in a specific environment (e.g., a designated geographical region, or on-premises) with bring-your-own-cloud deployment options. Native React component support enables custom development when needed, while maintaining no-code accessibility. JavaScript code blocks in Actionflows allow easy integration of custom logic that may be cumbersome to express visually. Extensive logging across all parts of the system significantly increases observability, so that when things inadvertently go wrong, debugging can be done efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Feature Table
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza3i7yohnzw5kuw7h6l0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fza3i7yohnzw5kuw7h6l0.png" alt=" " width="726" height="709"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Experience and Interface Design
&lt;/h2&gt;

&lt;p&gt;Bubble and Momen have taken distinctive approaches to their visual editor interfaces. Both roughly divide their editors into sections handling data, logic, and user interface, but they differ in terminology and user experience philosophy. Bubble has developed its own unique nomenclature, for example referring to database rows as "things", while Momen adheres more closely to industry-standard terminology.&lt;/p&gt;

&lt;p&gt;In terms of frontend development, Bubble currently holds a significant advantage, especially in terms of reusability. Developers can create reusable styles, components, and workflows, making it easier to maintain consistency across larger applications. Momen currently lacks frontend reusability features, though this functionality is actively under development and scheduled for release by the end of 2024. However, Momen does offer real-time frontend preview capabilities, allowing developers to see changes instantly without refreshing – a feature not available in Bubble.&lt;/p&gt;

&lt;p&gt;When it comes to version control and collaboration, Bubble provides a more mature offering with support for branching and merging, which Momen currently lacks. Both platforms support collaborative editing, but Bubble's branching and versioning capabilities give it an edge for team development. Bubble offers a frontend debugger, which Momen also lacks. Both platforms provide a logging system, with Momen offering more flexible log filtering including range queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74fx9d25wzcvu3fkzzut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F74fx9d25wzcvu3fkzzut.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Capabilities
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Frontend Development
&lt;/h3&gt;

&lt;p&gt;Both platforms provide standard HTML components for frontend development, though Bubble offers a more extensive component library including shapes, icons, and maps. For responsive design, both platforms leverage modern layout techniques, with Momen exclusively using flexbox while Bubble supports both block and flexbox layouts. Custom code integration differs between the platforms: Bubble uses a plugin system based on HTML and plain JavaScript, while Momen supports React components through its code component feature. Both platforms offer comprehensive SEO features out of the box.&lt;/p&gt;

&lt;p&gt;At runtime, Bubble is exclusively server-rendered, so each page visit is a round trip to Bubble's server. Momen is client-rendered by default, but it can also be server-rendered or statically generated. This means websites built in Momen are naturally single-page applications (SPAs), something Bubble cannot truly handle, only mimic via conditional views.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backend Architecture
&lt;/h3&gt;

&lt;p&gt;Both platforms are built on PostgreSQL, but their approaches to database management differ significantly. Bubble appears to abstract away from traditional database structures, likely using JSONB storage, which eliminates the need for migrations but comes with limitations. It lacks true 1:N relations, ACID guarantees in backend workflows, and support for atomic operations like increment/decrement. It also significantly hinders the proper operation of PostgreSQL's query optimizer, since JSONB lacks built-in statistics support, making queries and updates slower. Momen takes a different approach, providing a thin wrapper around PostgreSQL that exposes most of its native functionality.&lt;/p&gt;
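&lt;p&gt;To make the contrast concrete, here is the kind of atomic, transactional update (on a hypothetical &lt;code&gt;accounts&lt;/code&gt; table) that direct PostgreSQL access permits and that a JSONB-style abstraction struggles to guarantee:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Both updates succeed or fail together (ACID),
-- and each increment/decrement is atomic under concurrency.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
&lt;/code&gt;&lt;/pre&gt;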

&lt;p&gt;Neither platform currently supports direct looping in its workflow engine. Both handle complex permissions well for database operations. Momen extends its attribute-based access control (ABAC) permission system to cover third-party APIs, backend workflows (Actionflows), and AI interactions. API creation also differs significantly: Bubble's default APIs are not human-readable; each table has to be specifically enabled to expose it in a human-readable fashion, while Momen generates GraphQL APIs that are more developer-friendly. In addition, if not more importantly, Momen's GraphQL API allows fetching only the required data, including related data, in one round trip. For example, a list of posts (title and creation time only, ignoring update time), each with its last 5 viewers (username and avatar only), its author (username, avatar, and creation time), and its comment count, can be fetched in a single round trip. This significantly cuts down latency, resulting in faster-loading pages. At the same time, it reduces load on the backend, as unnecessary fields are not even fetched from the database.&lt;/p&gt;
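&lt;p&gt;As a rough illustration (schema hypothetical), the nested fetch above boils down to a single SQL statement on the server, rather than one query per post:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- One round trip: posts, author, comment count, and last 5 viewers
SELECT p.title, p.created_at,
       a.username AS author, a.avatar,
       (SELECT count(*) FROM comments c WHERE c.post_id = p.id) AS comment_count,
       recent.viewers
FROM posts p
JOIN users a ON a.id = p.author_id
LEFT JOIN LATERAL (
  SELECT array_agg(u.username) AS viewers
  FROM (SELECT pv.viewer_id
        FROM post_views pv
        WHERE pv.post_id = p.id
        ORDER BY pv.viewed_at DESC
        LIMIT 5) last5
  JOIN users u ON u.id = last5.viewer_id
) recent ON true;
&lt;/code&gt;&lt;/pre&gt;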

&lt;h3&gt;
  
  
  Performance and Scalability
&lt;/h3&gt;

&lt;p&gt;Performance characteristics vary significantly between the platforms. Bubble's backend workflows typically process around 100 rows per second, while Momen's approach of pushing as much work as possible directly to the database achieves up to 5,000 rows per second. This is especially significant for importing large amounts of data. Neither platform currently offers built-in backend caching controls.&lt;/p&gt;

&lt;p&gt;For infrastructure, Bubble offers dedicated instances under its control, while Momen provides both single-tenant servers (Momen-controlled) and bring-your-own-cloud (customer-controlled) deployment options. Bubble's dedicated instances start at around $3,500 per month. Financially, this represents a clear incentive to upgrade when applications exceed 22 million WU (Workflow Units) per month, equivalent to 44 million database inserts or updates. Momen's scaling model is based on requests per second (RPS), with single-tenant instances providing an additional 100-200 RPS each, up to a self-service maximum of 32 instances (3,200-6,400 RPS).&lt;/p&gt;

&lt;h3&gt;
  
  
  Platform Support and Future Development
&lt;/h3&gt;

&lt;p&gt;As of this writing, both platforms support web development, with Bubble offering beta versions of iOS and Android apps. Momen plans to begin mobile platform development in 2025. Bubble has announced plans for offline data support and notifications for their native apps, though the full feature set is still being determined.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Analysis
&lt;/h3&gt;

&lt;p&gt;The pricing models of both platforms reflect their different approaches to scalability. Bubble uses a Workflow Unit (WU) based pricing model, with costs calculated from various operations such as database queries, API calls, and plugin usage. Its pricing ranges from a free tier to enterprise-level dedicated instances at $3,500 monthly. &lt;a href="https://momen.app/pricing" rel="noopener noreferrer"&gt;Momen's pricing&lt;/a&gt; is structured around RPS and resource usage, with options for both shared and single-tenant infrastructure. Its single-tenant solutions start at $120 per month per instance, with additional costs based on storage and outflow requirements. For a more detailed analysis of hypothetical scenarios, please check out the calculator spreadsheets:&lt;/p&gt;

&lt;p&gt;Momen: &lt;a href="https://docs.google.com/spreadsheets/d/1QQ-zkiUGGDOmJaYADjP0zH9WoyOurAQ5Zaxjv2AJ-Cg/edit?gid=0#gid=0" rel="noopener noreferrer"&gt;https://docs.google.com/spreadsheets/d/1QQ-zkiUGGDOmJaYADjP0zH9WoyOurAQ5Zaxjv2AJ-Cg/edit?gid=0#gid=0&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Bubble: &lt;a href="https://docs.google.com/spreadsheets/d/1H0wbV_W2PI3GI6BtxkvRgGQkHjQ0FtGpReuN9-ZGzII/edit?gid=0#gid=0" rel="noopener noreferrer"&gt;https://docs.google.com/spreadsheets/d/1H0wbV_W2PI3GI6BtxkvRgGQkHjQ0FtGpReuN9-ZGzII/edit?gid=0#gid=0&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Portability
&lt;/h3&gt;

&lt;p&gt;Both platforms support full database exports, though neither currently offers tools for exporting application logic or workflows. This creates some level of vendor lock-in, as migrating complex applications out of the platforms would require significant redevelopment effort. The availability of migration tools remains unclear for both platforms.&lt;/p&gt;

&lt;h3&gt;
  
  
  Looking Ahead
&lt;/h3&gt;

&lt;p&gt;Both platforms have clear development roadmaps addressing their current limitations. Momen is actively developing frontend reusability features expected by end of 2024 and plans to begin mobile development in 2025. Bubble continues to mature its mobile offerings and is working on editor modernization.&lt;/p&gt;

&lt;h2&gt;
  
  
  My own biased 2c
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6o3abykmhqi5ufker2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6o3abykmhqi5ufker2n.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
I think that, more than likely, Momen is going to be the better platform in the future, not just because I own it, but because of its &lt;a href="https://momen.app/blogs/cost-efficient-no-code-full-stack-platform/" rel="noopener noreferrer"&gt;cleaner architecture&lt;/a&gt;, its openness to standards, and its openness to code. The latter two mean it will be better at leveraging existing assets, and code assets are going to be more plentiful than no-code ones for the foreseeable future. A cleaner, more advanced architecture should also enable Momen to move forward at a faster pace.&lt;/p&gt;

</description>
      <category>startup</category>
      <category>webdev</category>
      <category>tooling</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Building a Cost-Efficient No-Code Full-Stack Webapp Development Platform</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Wed, 09 Oct 2024 10:10:54 +0000</pubDate>
      <link>https://dev.to/momen_hq/building-a-cost-efficient-no-code-full-stack-webapp-development-platform-25ig</link>
      <guid>https://dev.to/momen_hq/building-a-cost-efficient-no-code-full-stack-webapp-development-platform-25ig</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Architect right with performance and cost in mind, and it is entirely feasible. &lt;/p&gt;

&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;Developers and entrepreneurs choose no-code for the promise of time-to-market and low development cost. However, a lingering and often not unfounded worry accompanies this decision, namely, "what if my idea actually took off?". &lt;/p&gt;

&lt;p&gt;I started Momen with this in mind, "this" being a potentially unhealthy obsession with efficiency, an obsession I had before I even started Momen. I am a strong believer that it is increases in efficiency that gave us the abundance we now enjoy. So, from the get-go, I architected and budgeted Momen to be cost-efficient and scalable (to an extent: 1M DAU, probably). In this article, we’ll break down the strategy and technical choices we’ve made to build a no-code platform that should take you from MVP to 1M DAU without breaking the bank (for both our clients and us). &lt;/p&gt;




&lt;h2&gt;
  
  
  Foundations of Cost-Efficient Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Choosing Open Standards and Avoiding Vendor Lock-In
&lt;/h3&gt;

&lt;p&gt;I personally have no axe to grind with proprietary technologies like DynamoDB, Spanner, Lambda, or whatever else cloud providers have built. In fact, I think they are great choices in many situations. However, Momen should not depend on proprietary technologies. Choosing a no-code platform is already a big commitment; piling on top of it another commitment to a particular cloud provider is, to me, just too much. I want to keep our clients' options as open as possible, so they can "bring their own cloud": Kubernetes, Java, Spring, PostgreSQL, Ceph, MinIO, and so on. &lt;/p&gt;

&lt;h3&gt;
  
  
  Leaning on Kubernetes for Deployment
&lt;/h3&gt;

&lt;p&gt;Kubernetes is the backbone of how we manage deployments in a cost-conscious, cloud-agnostic way. Yes, it comes with its own resource consumption, sometimes too much (think Istio and sidecars). Combine it with containerization, and you’re suddenly free to deploy wherever you want, scaling as needed without worrying about vendor-specific APIs or infrastructure quirks. Add StorageClass and StatefulSets into the mix, and now you’ve got persistent storage handled elegantly, no matter the cloud provider.&lt;/p&gt;

&lt;p&gt;Part of the cost of running a cloud dev/host platform is operational cost, and by leveraging Kubernetes, we are able to build an automated solution with a degree of portability, greatly reducing cost, especially when it comes to "bring-your-own-cloud" deployments. &lt;/p&gt;




&lt;h2&gt;
  
  
  Optimizing the Database for Cost Efficiency
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Choosing PostgreSQL for the Core Database
&lt;/h3&gt;

&lt;p&gt;PostgreSQL is performant and resource-efficient. We are able to run PostgreSQL with a minimum of 0.25 vCPU and 256MB of RAM while maintaining reasonable performance for most clients (they can scale up their database if needed). This has been crucial to keeping costs down. Although we are still far from being able to give our clients an RDS-like experience, we are able to offer pre-tuned databases that should satisfy most users' needs. &lt;/p&gt;

&lt;p&gt;PostgreSQL's vast array of extensions also significantly reduces our development costs, allowing us to offer more common functionality to our users without much additional development. Prime examples are PostGIS and pgvector.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using PostgreSQL as Designed for Optimal Performance
&lt;/h3&gt;

&lt;p&gt;Now, here’s where many developers stumble. PostgreSQL is robust, yes, but if you misuse it, you can still tank your system’s performance. Many solutions abuse features like JSONB or HStore, turning their relational database into a chaotic hybrid of data types. Sure, those features have their uses, but over-relying on them consumes more disk space, increases I/O, gives up referential integrity, and messes with the optimizer. The last point is especially noteworthy: unless the developer is cognizant of the fact that PostgreSQL's default statistics mechanism is almost completely unsuited to filtering, joining, or sorting on JSONB attributes, and manually creates the appropriate statistics, query planning can be completely off and potentially slow queries down by a factor of a thousand or even a million. &lt;/p&gt;
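&lt;p&gt;For those who do need to filter on JSONB attributes, one mitigation (PostgreSQL 14+, shown on a hypothetical &lt;code&gt;orders&lt;/code&gt; table) is to create extended statistics on the extracted expression, so the planner gets realistic row estimates:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Gather statistics on the expression that queries actually filter on;
-- an index on the same expression would have a similar side effect.
CREATE STATISTICS orders_status_stats ON (payload-&gt;&gt;'status') FROM orders;
ANALYZE orders;
&lt;/code&gt;&lt;/pre&gt;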

&lt;p&gt;At &lt;a href="https://momen.app" rel="noopener noreferrer"&gt;Momen&lt;/a&gt;, we play by the rules. While we do support JSONB fields and expose them to our users so they can choose to use them, we stick to PostgreSQL’s strengths—tables, columns, indexes—whenever possible, to ensure performance doesn’t degrade as the platform scales. Use the database as it was meant to be used, and it’ll reward you with speed, reliability, and scalability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Efficient Multi-Tenancy with PostgreSQL Schemas
&lt;/h3&gt;

&lt;p&gt;Momen handles multi-tenancy through PostgreSQL schemas. Rather than spinning up a new database for each project (which we used to do), we isolate each project within its own schema (also known as a namespace). This lets us multiplex a single database instance among as many as 1,000 free projects, all whilst ensuring that no project can see or in any other way interact with another project's database. &lt;/p&gt;
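&lt;p&gt;A minimal sketch of the schema-per-tenant pattern (names hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Each project gets its own schema inside a shared database instance
CREATE SCHEMA project_42;
-- Unqualified table names now resolve only within this project's schema
SET search_path TO project_42;
-- Keep other tenants out
REVOKE ALL ON SCHEMA project_42 FROM PUBLIC;
&lt;/code&gt;&lt;/pre&gt;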

&lt;h3&gt;
  
  
  PostgreSQL as a Queue
&lt;/h3&gt;

&lt;p&gt;Why bring in more moving parts when you don’t need to? Instead of introducing a separate queuing system like RabbitMQ, we repurpose PostgreSQL tables to act as our queue. We use a combination of &lt;code&gt;SELECT ... FOR UPDATE SKIP LOCKED&lt;/code&gt;, &lt;code&gt;LISTEN&lt;/code&gt;/&lt;code&gt;NOTIFY&lt;/code&gt;, and dedicated worker threads to construct the queue. Sure, the throughput is not going to be in the millions, or even hundreds of thousands, of messages per second, but most projects have no need for that. We are able to maintain exactly-once semantics for each message while saving around 1 GB of RAM per database. &lt;/p&gt;
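&lt;p&gt;A sketch of the dequeue step (on a hypothetical &lt;code&gt;jobs&lt;/code&gt; table): because locked rows are skipped, concurrent workers each claim a different job without blocking one another:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Claim one queued job; SKIP LOCKED keeps workers from waiting on each other
UPDATE jobs
SET status = 'running'
WHERE id = (
  SELECT id FROM jobs
  WHERE status = 'queued'
  ORDER BY id
  FOR UPDATE SKIP LOCKED
  LIMIT 1
)
RETURNING id, payload;

-- Producers wake idle workers instead of making them poll
NOTIFY job_queue;
&lt;/code&gt;&lt;/pre&gt;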

&lt;h3&gt;
  
  
  Leveraging PostgreSQL Extensions for Enhanced Capabilities
&lt;/h3&gt;

&lt;p&gt;As mentioned previously, we integrate deeply with PostgreSQL's extensions too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pgvector:&lt;/strong&gt; Need similarity search but don't want to introduce a separate vector database? No problem. pgvector lets us handle that directly within PostgreSQL.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostGIS:&lt;/strong&gt; Geospatial queries are heavy by nature. But instead of resorting to a dedicated geospatial database, we integrate PostGIS into PostgreSQL. It handles those queries with efficiency and precision, without inflating costs.&lt;/li&gt;
&lt;/ul&gt;
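&lt;p&gt;For instance, a similarity search with pgvector stays inside the same database (table and vector size hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE documents (id bigserial PRIMARY KEY, embedding vector(3));
-- Five nearest neighbours by Euclidean distance
SELECT id FROM documents ORDER BY embedding &lt;-&gt; '[0.1, 0.2, 0.3]' LIMIT 5;
&lt;/code&gt;&lt;/pre&gt;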




&lt;h2&gt;
  
  
  Designing an Efficient Backend for High Performance
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Java + Spring Boot for Backend Services
&lt;/h3&gt;

&lt;p&gt;When it comes to backend services, the Java ecosystem is a tried-and-true solution that provides both reliability and scalability. Performance-wise, Java is also much faster than other popular languages like Python or Ruby. Compared to Node.js, Java is naturally multi-threaded, providing the ability to utilize more than one core without extra scaling logic, which is quite nice when paired with our Kubernetes resource controls, as a service can boost CPU utilization beyond what is allocated as long as all other services are idle. It is true that the JVM has much higher memory overhead than Node.js; this is partially mitigated by running multiple tenants on a single JVM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Forgoing ORM for Direct Database Access
&lt;/h3&gt;

&lt;p&gt;ORMs (Object-Relational Mappers) are a popular choice for interacting with the database from the server. They have their place, but we have decided that such an abstraction is not suitable for a no-code runtime: it consumes too much memory and makes query generation much less predictable than something like jOOQ. Combined with the Dataloader pattern, we generate efficient SQL queries, avoiding the annoying n+1 query problems that commonly plague ORM integrations. &lt;/p&gt;
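&lt;p&gt;To illustrate the n+1 problem the Dataloader pattern avoids (tables hypothetical): rather than issuing one author lookup per post, the collected keys are batched into a single query:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- Naive ORM pattern: SELECT ... FROM users WHERE id = ?  (repeated n times)
-- Batched alternative: one round trip for all collected author ids
SELECT id, username FROM users WHERE id = ANY(ARRAY[1, 2, 3]);
&lt;/code&gt;&lt;/pre&gt;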




&lt;h2&gt;
  
  
  Automation and Efficiency Enhancements
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Automating Schema Migration
&lt;/h3&gt;

&lt;p&gt;One of the drawbacks of using relational databases is that we now have to handle schema migration. It is actually quite a difficult problem, but with it solved, our users can freely change their database's table structure, including relationships between tables. &lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Slow Query Detection and Index Creation
&lt;/h3&gt;

&lt;p&gt;Indexes are crucial to ensuring performance, and simultaneously reducing cost, for any reasonably sized project. On the flip side, over-indexing can reduce performance and increase cost, as updates, planning, and vacuuming all become more expensive. Most of the time, appropriate indexing is a skill beyond the reach of most developers, be they code or no-code. At Momen, we automate the detection of slow queries and have a system in place to automatically generate indexes where needed, taking that burden off developers' shoulders and ensuring your apps stay performant and cost-efficient. &lt;/p&gt;
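&lt;p&gt;Conceptually, the generated remediation is a plain index build (table and column hypothetical); the hard part is deciding when it pays for itself:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;-- CONCURRENTLY avoids blocking writes while the index is built
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_orders_customer_id
  ON orders (customer_id);
&lt;/code&gt;&lt;/pre&gt;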

&lt;h3&gt;
  
  
  Automated Generation of Pre-Signed URLs for CDN/S3
&lt;/h3&gt;

&lt;p&gt;File uploads are typically done in one of two ways: either using the server as a relay, or uploading directly to S3 from the client. A server relay is more costly and can create bottlenecks, but offers more control. At Momen, we decided to bypass the server. We ensure access control by generating pre-signed URLs on the server; users then use a pre-signed URL to upload files directly to the storage service. &lt;/p&gt;

&lt;h3&gt;
  
  
  Frontend Dependency Detection
&lt;/h3&gt;

&lt;p&gt;One of the most important optimizations we’ve made is in how we fetch data from the backend. Using automated dependency detection, the runtime frontend only requests the fields it needs for rendering—nothing more, nothing less. This cuts down on excess data transfers, reduces the query load on the database, and ensures a faster user experience. Multiple levels of related data can be fetched in one go, reducing both round-trip time and connection overhead on the client as well as the server. &lt;/p&gt;




&lt;h2&gt;
  
  
  Advanced Logging, Monitoring, and Type-Checking
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Using a Dedicated Rust Backend for Logging and Monitoring
&lt;/h3&gt;

&lt;p&gt;For logging and monitoring, we’ve turned to Rust, a language that excels in high-performance, low-latency applications. By dedicating a Rust backend to logging and monitoring tasks, we minimize the impact on the core system while still gathering crucial insights. A separate PostgreSQL database is used to store and analyze this data instead of more common logging solutions like Elasticsearch, as PostgreSQL is more than enough for our users' scale while being orders of magnitude cheaper to run than in-memory solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Type-Checking in the Browser
&lt;/h3&gt;

&lt;p&gt;Keeping type-checking in the user's browser rather than relying on server-side processing greatly reduces server cost. While the actual compute cost may not be excessively high, in order to reliably check types, we need to load most of the project into memory (the equivalent of source code). This consumes a lot of memory and poses challenges as to when we can unload it. Since the project is already loaded in the user's browser, keeping the type-checking logic there not only reduces the load on our servers, but also speeds up the development cycle by providing instant feedback to developers. I'd call it a lightweight, distributed approach that improves both performance and developer productivity.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Path to Sustainable Scalability
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Long-Term Cost Benefits of Efficient Engineering
&lt;/h3&gt;

&lt;p&gt;It’s tempting to cut corners early on, but investing in smart, efficient engineering pays off tenfold in the long run. Our philosophy at Momen.app is to build once—build well. We are not perfect by our own standards, but that is what we try to achieve. By making strategic architectural decisions upfront, we can avoid the pitfalls of scaling that many no-code platforms encounter. This results in a platform that can grow without spiraling costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost-Effective and Scalable No-Code Development
&lt;/h3&gt;

&lt;p&gt;Here’s the bottom line: no-code platforms can be scalable and cost-effective, but only if you put the right architecture and engineering practices in place. In that respect, no-code is no different from any other software. At Momen.app, we aim to prove that with the right mix of open standards, automation, and efficient design, you can deliver powerful no-code solutions that don’t buckle under pressure—ensuring that both the platform and its users thrive.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building a no-code platform that balances cost and scalability is no easy feat, but it’s entirely possible with the right strategy. By investing in the right tools, choosing the right architectures, and leveraging automation, we will continue to create a system that can grow with our users without runaway costs. No-code doesn’t have to mean inefficient—it just takes smart engineering, and a little bit of foresight.&lt;/p&gt;

&lt;p&gt;At &lt;a href="https://momen.app" rel="noopener noreferrer"&gt;Momen&lt;/a&gt;, we’re ready for the future, and our clients are too.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>development</category>
      <category>nocode</category>
    </item>
    <item>
      <title>Why You Need (More Than) Locks: Ensuring Consistency with Transactions in Your No-Code Applications</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Wed, 25 Sep 2024 10:36:57 +0000</pubDate>
      <link>https://dev.to/momen_hq/why-you-need-more-than-locks-ensuring-consistency-with-transactions-in-your-no-code-applications-kkj</link>
      <guid>https://dev.to/momen_hq/why-you-need-more-than-locks-ensuring-consistency-with-transactions-in-your-no-code-applications-kkj</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;TL;DR&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Consistency is vital in no-code development for user trust. Many platforms struggle with data reliability due to locking issues. Momen.app uses PostgreSQL’s Multi-Version Concurrency Control (MVCC) to ensure data integrity, allowing developers to focus on building features without worrying about consistency problems.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Consistency Matters in No-Code Development&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;No-code development is often associated with feature delivery, problem solving and the quick realization of business value. And it should be; after all, time-to-market is probably one of the most important reasons to choose no-code in the first place. But underneath every app lies one critical need: consistent, reliable data. Users' trust can quickly erode when they observe inconsistent, illogical or missing data. &lt;/p&gt;

&lt;p&gt;Most no-code platforms sacrifice consistency in the name of flexibility and ease of development. No-code developers then try to win some of that consistency and predictability back using homebrewed "locks"—mechanisms that stop multiple processes from accessing the same data at the same time. While this sounds like a great solution, it turns out locks alone aren’t enough to ensure data consistency. In this article, we'll explore why, as a no-code developer, you need more than locks to keep your data safe, and how, by properly exposing the abstractions PostgreSQL (a popular database) provides, Momen.app ensures the right level of protection without extra work on your part.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What are Locks?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At its core, a lock is a simple concept: it prevents two processes (or users) from accessing (reading or writing) the same piece of data simultaneously. Think of it like locking a door—you have exclusive control over the room while the door is locked, and no one else can come in until you’re done. In computer science lingo, this is also known as a "mutex". &lt;/p&gt;

&lt;p&gt;Fundamentally, locks are not a database idea, but they are often used in databases. When used properly, they can eliminate a lot of data consistency issues. In complex applications, though, relying on locks alone can still lead to data inconsistencies, inefficiencies, or even errors.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;When Locks Work as Intended&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Locks work when their functionality is limited to mutual exclusion alone. Take inventory management, for example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Obtain a mutex on a row that represents a particular SKU&lt;/li&gt;
&lt;li&gt;Check the remaining inventory represented as a field in that row, ensuring it is larger than the number to deduct&lt;/li&gt;
&lt;li&gt;Subtract remaining inventory by the number of items to be sold&lt;/li&gt;
&lt;li&gt;Update the row with new inventory&lt;/li&gt;
&lt;li&gt;Release the mutex&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this example, as long as ALL access in EVERY part of your application to the SKU table follows this sequence of actions, it is reasonable to expect consistency to be maintained. &lt;/p&gt;
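&lt;p&gt;In database terms, the five steps above map naturally onto a row-level lock held inside a transaction. A minimal sketch in PostgreSQL (the "sku" table and its columns here are hypothetical, for illustration only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BEGIN;
-- Step 1: obtain a mutex on the row for this SKU
SELECT inventory FROM sku WHERE id = 42 FOR UPDATE;
-- Steps 2-4: check the remaining inventory and deduct in one statement
UPDATE sku SET inventory = inventory - 1
 WHERE id = 42 AND inventory &amp;gt;= 1;
-- Step 5: the lock is released automatically when the transaction ends
COMMIT;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;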




&lt;h2&gt;
  
  
  &lt;strong&gt;When Locks Fall Short: Common Issues You Might Face&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;However, even with that same inventory management example, there are plenty of potential pitfalls. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Partial Isolation&lt;/strong&gt;: What if not ALL access was protected by the same mutex—you forgot to follow the strict protocol somewhere in your application? Now you may have two pieces of logic (threads) running simultaneously, both seeing an inventory of 1 for the same SKU, and both trying to deduct 1 from it. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deadlocks&lt;/strong&gt;: Deadlocks happen when two users try to access resources in a circular way—each waiting for the other to finish. Imagine Thread A tries to lock SKU 1 and then SKU 2, while Thread B tries to lock SKU 2 and then SKU 1. Now both are stuck waiting for the other to finish, which will never happen. In addition to prevention, you need to build a mechanism to detect deadlocks in order to break them. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Granularity Issues&lt;/strong&gt;: Locking too much or too little can also be a problem. If you lock an entire table when only modifying a single row, other users have to wait unnecessarily, significantly reducing the system's throughput. In the worst case, 99.99% reduction is not unheard of. Also, do you lock for both read and write or just write?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Atomicity&lt;/strong&gt;: What if, halfway through the sequence of actions the system is supposed to perform, something unexpected arises? That "something" can range from as drastic as a power outage to as minor as a logical bug. Now what do you do with the rows you have already modified? Do you have to manually build a rollback mechanism that handles unexpected exits at every stage of every sequence? That would be both tedious and error-prone. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Constraints&lt;/strong&gt;: Business logic sometimes dictates that a particular field's value must be unique across all rows in a table (a unique constraint), or that a field's value may only be one of the values of a field in a different table (a foreign key constraint). Imagine an automatic SKU entry process where packages must be scanned individually and recorded. You may have multiple scanners pointing at a conveyor belt from different angles to capture the barcode regardless of the package's orientation, which means more than one scanner may read the barcode on the same package simultaneously. Most no-coders implement this as a two-step process: check whether that SKU exists in the database; if it does not, write it in. Locking a particular row does not work here, because there is no row to lock yet. So two scanners may simultaneously conclude, in the "check" step, that the package is not yet recorded, and then both write it into the table. Now you have two rows representing the same SKU—a breakdown of the logical unique constraint. Similar, if not more insidious, inconsistencies can arise from the breakdown of foreign key constraints. &lt;/li&gt;
&lt;/ol&gt;
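&lt;p&gt;The check-then-insert race from the last point is easy to visualize as two interleaved sessions (again with a hypothetical "sku" table, and no unique constraint on "barcode"):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Scanner A: check
SELECT id FROM sku WHERE barcode = 'PKG-001';   -- 0 rows
-- Scanner B: check (runs before A inserts)
SELECT id FROM sku WHERE barcode = 'PKG-001';   -- 0 rows
-- Scanner A: "not recorded yet", so insert
INSERT INTO sku (barcode) VALUES ('PKG-001');
-- Scanner B: also "not recorded yet", so insert
INSERT INTO sku (barcode) VALUES ('PKG-001');
-- The table now contains two rows for the same barcode.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;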




&lt;h2&gt;
  
  
&lt;strong&gt;How Momen.app Addresses These Issues&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Fortunately, Momen.app properly leverages PostgreSQL, which employs transactional MVCC (Multi-Version Concurrency Control), a more advanced approach to managing data consistency. MVCC lets multiple users read and write data simultaneously, without locking the whole system, by maintaining multiple versions of the data instead of using locks. By exposing the transactional MVCC semantics in actionflows / batch mutations, and by letting users directly or indirectly define database-native constraints, Momen.app solves many of the shortfalls of no-coders homebrewing lock mechanisms. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Partial Isolation&lt;/strong&gt;: Because MVCC is the default behavior, within the actionflows (which run entirely on the backend and are transactional) there is no chance of "forgetting" to lock something. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deadlocks&lt;/strong&gt;: While MVCC can’t eliminate all deadlocks, it significantly reduces the need for locking by allowing transactions to operate on different versions of the data. Also PostgreSQL has deadlock detection / abort mechanisms built-in, whose reliability has stood the test of decades of production use. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Granularity Issues&lt;/strong&gt;: MVCC avoids the need to lock entire tables by letting each transaction work on its own version of the data, preventing unnecessary delays while ensuring the data remains consistent—giving you the best of both worlds (throughput AND consistency).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Atomicity&lt;/strong&gt;: Operations are all-or-nothing. Either everything in a transaction happens, or nothing happens, avoiding partial updates. This again is provided out of the box, removing the burden of implementing your own rollback mechanism. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of Constraints&lt;/strong&gt;: Before any other thread can see your modifications (i.e., before committing), all constraints such as unique / foreign key are checked, ensuring they are always honored. In the scenario where two barcode scanners read the same package and both attempt to write the same SKU simultaneously, in Momen the "two identical SKUs" problem can be avoided entirely by performing INSERT ... ON CONFLICT (barcode) DO NOTHING. So you can rest assured that your SKU table always contains unique SKUs!&lt;/li&gt;
&lt;/ul&gt;
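&lt;p&gt;For the barcode scanner scenario above, the conflict-tolerant insert boils down to a single statement (sketched here in raw SQL; it assumes a unique constraint on a hypothetical "barcode" column):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO sku (barcode) VALUES ('PKG-001')
ON CONFLICT (barcode) DO NOTHING;
-- Whichever scanner gets there second simply inserts nothing,
-- so the SKU table stays unique without any application-level lock.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;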




&lt;h2&gt;
  
  
  &lt;strong&gt;Focus on Building, Let Us Handle Consistency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As a no-code developer using Momen.app, you can focus on building the best features for your users while trusting that the underlying systems are handling the tough problems of data consistency. With Momen.app leveraging PostgreSQL's advanced transactional model and MVCC, you're protected from partial isolation, deadlocks, excessive-locking-induced bad performance, partial updates and race-conditions-induced breakdown of logical constraints—so you can keep creating.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>powerfuldevs</category>
    </item>
    <item>
      <title>In-Depth Analysis of Search Strategies for Momen</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Fri, 20 Sep 2024 11:09:35 +0000</pubDate>
      <link>https://dev.to/momen_hq/in-depth-analysis-of-search-strategies-for-momen-3ebm</link>
      <guid>https://dev.to/momen_hq/in-depth-analysis-of-search-strategies-for-momen-3ebm</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Install the pg_trgm extension, use a GIN index, and add an exclusion constraint using hash if needed. GIN with pg_trgm gets you indexed operations on 'LIKE', 'ILIKE', 'SIMILAR TO' and '~', while the exclusion constraint using hash gives you both uniqueness and super fast point selects. &lt;/p&gt;
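&lt;p&gt;In concrete SQL, that TL;DR boils down to three statements (a sketch, assuming a table "simple" with a text column "content", as used in the experiments below):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION IF NOT EXISTS pg_trgm;
-- Indexed LIKE / ILIKE / SIMILAR TO / ~ on content
CREATE INDEX idx_simple_content_trgm ON simple USING GIN (content gin_trgm_ops);
-- Optional: uniqueness plus fast point selects, without a large B-tree
ALTER TABLE simple ADD CONSTRAINT simple_content_unique
  EXCLUDE USING hash (content WITH =);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;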




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://momen.app" rel="noopener noreferrer"&gt;Momen&lt;/a&gt; is a cloud-native no-code IDE (integrated development environment) that we here at Momen have been developing since early 2020. It aims to significantly lower the hurdle one has to overcome to build an application that is capable of meeting the requirements of serving requests from a non-insignificant crowd. A very common requirement is text search. I would like to share some of my thoughts on this requirement.&lt;/p&gt;




&lt;h2&gt;
  
  
  Breaking Down the Requirement
&lt;/h2&gt;

&lt;p&gt;A lot is implied when our customers ask the question: "Do you support search?". After some digging, I narrowed the requirement down to the following.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exact matches. e.g. SELECT * FROM account WHERE phone_number = '12312345678'. This is the easiest type to satisfy. &lt;/li&gt;
&lt;li&gt;Prefix matches. e.g. SELECT * FROM post WHERE post.title ILIKE 'Momen%'. As we shall see later, these are quite easy as well. &lt;/li&gt;
&lt;li&gt;Suffix matches. e.g. SELECT * FROM post WHERE post.title ILIKE '%Momen'. These look similar to prefix matches, and indeed are quite similar.&lt;/li&gt;
&lt;li&gt;Infix matches. e.g. SELECT * FROM post WHERE post.title ILIKE '%Momen%'. These are quite a bit more difficult to deal with, and require a different strategy entirely. &lt;/li&gt;
&lt;li&gt;Regular-expression matches. e.g. SELECT * FROM post WHERE post.title ~ '(Momen|mirror)'. Yeah, a tough nut to crack.&lt;/li&gt;
&lt;li&gt;Full-text searches. e.g. SELECT * FROM post WHERE post.content @@ to_tsquery('Momen &amp;amp; mirror'). (Postgres syntax) This one is especially difficult to deal with, and we shall reserve this for a separate post. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition to those "search" requirements, our customers frequently put a "unique" constraint on the text content. Although this seems innocuous on the surface, it can become a huge pain under certain circumstances. &lt;/p&gt;




&lt;h2&gt;
  
  
  Relevant Background Information
&lt;/h2&gt;

&lt;p&gt;Our product, mirror, is opinionated. We are a start-up team. We are very cognizant of the resources that we have, and being opinionated allows us not to be bogged down by the innumerable combinations of technology choices at every level of the stack. We also wanted to have as few components as possible while meeting our customers' needs. &lt;/p&gt;

&lt;p&gt;Our database of choice is Postgres, and our API was initially generated by Hasura. Hasura is a GraphQL backend that essentially provides a pass-through SQL interface, and does not offer the ability to inject custom logic for specific types of queries. That means whatever we decide to do to optimize the SQL queries generated by all those different kinds of "searches" has to live inside the database itself, in the form of schema and index design. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwzyp54ifqv5bk03y2qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwzyp54ifqv5bk03y2qw.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Route to Optimization
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;It is not scientific, but should be representative. &lt;br&gt;
Surface Book 2, i7-8650U + 16GB + 512GB. &lt;br&gt;
Postgres 14 installed on Windows' NTFS (not WSL2).&lt;br&gt;
Populated via&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE momen ENCODING 'UTF-8';
CREATE TABLE simple (id serial PRIMARY KEY, content TEXT);
INSERT INTO simple (content) SELECT gen_random_uuid() FROM generate_series(1, 1000000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Exact Matches
&lt;/h3&gt;

&lt;p&gt;This one is simple enough. All it takes is to create any kind of index that supports equality test. &lt;br&gt;
Before any optimization, let's establish the baseline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content = 'f0f453fd-8f7c-44f2-bcc0-124f706786cf';
QUERY PLAN
-------------------------------------------------------------------------
Gather  (cost=1000.00..15554.43 rows=1 width=41) (actual time=144.976..172.778 rows=0 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   -&amp;gt;  Parallel Seq Scan on simple  (cost=0.00..14554.33 rows=1 width=41) (actual time=35.305..35.305 rows=0 loops=3)
         Filter: (content = 'f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text)
         Rows Removed by Filter: 333333
 Planning Time: 1.331 ms
 Execution Time: 172.798 ms
(8 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the parallel sequential scan. Now let's just create a vanilla B-tree index on content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# CREATE INDEX idx_simple_by_content ON simple (content);
momen=# explain analyze select * from simple where content = 'f0f453fd-8f7c-44f2-bcc0-124f706786cf';
QUERY PLAN
-------------------------------------------------------------------------
Index Scan using idx_simple_by_content_b_tree on simple  (cost=0.42..8.44 rows=1 width=41) 
(actual time=0.287..0.288 rows=1 loops=1)
Index Cond: (content = 'f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text)
Planning Time: 0.394 ms
Execution Time: 0.323 ms
(4 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now have an index scan, and execution time drops from hundreds of milliseconds to hundreds of microseconds—about a 1000x difference on this 1-million-row dataset. &lt;/p&gt;

&lt;h3&gt;
  
  
  Prefix Matches
&lt;/h3&gt;

&lt;p&gt;These are a bit more interesting. Let's try them with the previously created B-tree index on content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content like 'f0f453fd-8f7c-44f2-bcc0-124f706786cf%';
QUERY PLAN
-------------------------------------------------------------------------
Gather  (cost=1000.00..15564.33 rows=100 width=41) (actual time=0.528..100.149 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
-&amp;gt;  Parallel Seq Scan on simple  (cost=0.00..14554.33 rows=42 width=41) (actual time=26.435..52.645 rows=0 loops=3)
Filter: (content ~~ 'f0f453fd-8f7c-44f2-bcc0-124f706786cf%'::text)
Rows Removed by Filter: 333333
Planning Time: 1.960 ms
Execution Time: 100.194 ms
(8 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Odd, we are not going through the index at all; we ended up with a parallel sequential scan. One would think prefix matches should be indexable, since we can quite simply express them as a conjunction of two range conditions, and B-trees are perfect for those queries. So let's try that.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content &amp;gt;= 'f0f453fd-8f7c-44f2-bcc0-124f706786cf' and content &amp;lt; 'f0f453fd-8f7c-44f2-bcc0-124f706786cg';
QUERY PLAN
-------------------------------------------------------------------------
Index Scan using idx_simple_by_content_b_tree on simple  (cost=0.42..8.45 rows=1 width=41) (actual time=0.113..0.113 rows=1 loops=1)
Index Cond: ((content &amp;gt;= 'f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text) AND (content &amp;lt; 'f0f453fd-8f7c-44f2-bcc0-124f706786cg'::text))
Planning Time: 0.167 ms
Execution Time: 0.128 ms
(4 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Amazing! The index is performing its magic tricks again! So it appears that Postgres wasn't smart enough to figure out this transformation on its own. Oh, what happened to SQL's promise of "let users express what data to retrieve, and let the engine underneath take care of seamlessly retrieving it"? It turns out that Postgres' "inability" to translate "LIKE 'value%'" was due to the limitations of the &amp;gt;= and &amp;lt; operators. Both operate only at the byte-value level, and once you have a collation other than "C", things can break down very quickly. So that "inability" was really Postgres protecting us from ourselves. The correct operators turn out to be "~&amp;gt;=~" and "~&amp;lt;~". Let's bask in the light of some magic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# create index idx_simple_by_content_b_tree_text_pattern_ops on simple (content text_pattern_ops);
momen=# explain analyze select * from simple where content like 'f0f453fd-8f7c-44f2-bcc0-124f706786cf%';
QUERY PLAN
-------------------------------------------------------------------------
Index Scan using idx_simple_by_content_b_tree_text_pattern_ops on simple  (cost=0.42..2.65 rows=100 width=41) (actual time=0.041..0.042 rows=1 loops=1)
Index Cond: ((content ~&amp;gt;=~ 'f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text) AND (content ~&amp;lt;~ 'f0f453fd-8f7c-44f2-bcc0-124f706786cg'::text))
Filter: (content ~~ 'f0f453fd-8f7c-44f2-bcc0-124f706786cf%'::text)
Planning Time: 1.785 ms
Execution Time: 0.053 ms
(5 rows)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That "text_pattern_ops" is all the "nudge" Postgres needed. "text_pattern_ops" is an "opclass". "The operator class identifies the operators to be used by the index for that column. ", says Postgres' &lt;a href="https://www.postgresql.org/docs/14/indexes-opclass.html" rel="noopener noreferrer"&gt;official document&lt;/a&gt;. Usually we do not specify one, and that just means that Postgres will choose the default operator class for that column's data type. In addition to =, the default operator class supports the following operations: &amp;lt;, &amp;lt;=, &amp;gt;, and &amp;gt;=. And that was why the "LIKE" operator that operates on two text values, which is implemented by Postgres' "~ ~", wasn't getting the proper treatment. On line 6, we can clearly see that the index condition involves the text version of "&amp;gt;=" and "&amp;lt;", namely "~&amp;gt;=~" and "~&amp;lt;~". And Postgres does indeed translate the "~ ~" condition for us! &lt;/p&gt;

&lt;p&gt;A natural follow up would be, what are all the operators supported by "text_pattern_ops"?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# SELECT am.amname AS index_method,
       opf.opfname AS opfamily_name,
       amop.amopopr::regoperator AS opfamily_operator
  FROM pg_am am,
       pg_opfamily opf,
       pg_amop amop
 WHERE opf.opfmethod = am.oid AND amop.amopfamily = opf.oid
       AND opf.opfname = 'text_pattern_ops';
 index_method |  opfamily_name   | opfamily_operator
--------------+------------------+-------------------
 btree        | text_pattern_ops | ~&amp;lt;~(text,text)
 btree        | text_pattern_ops | ~&amp;lt;=~(text,text)
 btree        | text_pattern_ops | =(text,text)
 btree        | text_pattern_ops | ~&amp;gt;=~(text,text)
 btree        | text_pattern_ops | ~&amp;gt;~(text,text)
 hash         | text_pattern_ops | =(text,text)
(6 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You might have guessed, since we dropped the index that supported &amp;lt;, &amp;lt;=, &amp;gt;, and &amp;gt;= operators, some queries now suffer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content &amp;gt;= 'f0f453fd-8f7c-44f2-bcc0-124f706786cf' and content &amp;lt; 'f0f453fd-8f7c-44f2-bcc0-124f706786cg';
QUERY PLAN
-------------------------------------------------------------------------
Gather  (cost=1000.00..16596.10 rows=1 width=41) (actual time=0.648..204.607 rows=1 loops=1)
Workers Planned: 2
Workers Launched: 2
-&amp;gt;  Parallel Seq Scan on simple  (cost=0.00..15596.00 rows=1 width=41) (actual time=89.163..148.741 rows=0 loops=3)
 Filter: ((content &amp;gt;= 'f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text) AND (content &amp;lt; 'f0f453fd-8f7c-44f2-bcc0-124f706786cg'::text))
 Rows Removed by Filter: 333333
Planning Time: 0.586 ms
Execution Time: 204.665 ms
(8 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And yes, now we are back to sequential scans. Though one could very reasonably argue that few requirements involve byte-wise string range lookups, and you could always choose to keep the original B-tree index. &lt;/p&gt;
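&lt;p&gt;If your application does need both byte-wise range lookups and prefix matches, nothing stops you from keeping both indexes side by side, at the cost of extra storage and slower writes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Default opclass: serves &amp;lt;, &amp;lt;=, &amp;gt;, &amp;gt;= and =
CREATE INDEX idx_simple_by_content_b_tree ON simple (content);
-- text_pattern_ops opclass: serves LIKE 'prefix%' and =
CREATE INDEX idx_simple_by_content_b_tree_text_pattern_ops ON simple (content text_pattern_ops);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;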

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcuqnjubo51psge95b2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcuqnjubo51psge95b2z.png" alt=" " width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Suffix Matches
&lt;/h3&gt;

&lt;p&gt;Now that we know which operators "text_pattern_ops" supports, it is not difficult to conclude that we can't express LIKE '%something' solely in terms of the text versions of less-than, less-than-or-equal-to, equal-to, greater-than and greater-than-or-equal-to. (Give it a try if you don't believe me.)&lt;br&gt;
And that is reflected in Postgres' query plan:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content like '%f0f453fd-8f7c-44f2-bcc0-124f706786cf';
QUERY PLAN
-------------------------------------------------------------------------
Gather  (cost=1000.00..14388.26 rows=100 width=41) (actual time=0.599..178.746 rows=1 loops=1)
Workers Planned: 3
Workers Launched: 3
-&amp;gt;  Parallel Seq Scan on simple  (cost=0.00..13378.26 rows=32 width=41) (actual time=27.477..52.040 rows=0 loops=4)
Filter: (content ~~ '%f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text)
Rows Removed by Filter: 250000
Planning Time: 0.114 ms
Execution Time: 178.763 ms
(8 rows)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As expected, we have reverted to a sequential scan. However, once we reverse the string, "%query" effectively becomes "query%". So let's go that route.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# create index idx_simple_by_reverse_content_b_tree_text_pattern_ops on simple (reverse(content) text_pattern_ops);
momen=# explain analyze select * from simple where reverse(content) like 'fc687607f421-0ccb-2f44-c7f8-df354f0f%';
QUERY PLAN
-------------------------------------------------------------------------
Bitmap Heap Scan on simple  (cost=92.38..4250.57 rows=5000 width=41) (actual time=0.023..0.023 rows=1 loops=1)
Filter: (reverse(content) ~~ 'fc687607f421-0ccb-2f44-c7f8-df354f0f%'::text)
Heap Blocks: exact=1
-&amp;gt;  Bitmap Index Scan on idx_simple_by_reverse_content_b_tree_text_pattern_ops  (cost=0.00..91.13 rows=5000 width=0) (actual time=0.015..0.015 rows=1 loops=1)
Index Cond: ((reverse(content) ~&amp;gt;=~ 'fc687607f421-0ccb-2f44-c7f8-df354f0f'::text) AND (reverse(content) ~&amp;lt;~ 'fc687607f421-0ccb-2f44-c7f8-df354f0g'::text))
Planning Time: 0.162 ms
Execution Time: 0.047 ms
(7 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And yes, as expected, we have the beautiful performance afforded to us by the lovely index. &lt;/p&gt;
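&lt;p&gt;One practical note on the reversed-index trick: the application has to reverse the search pattern itself before prepending the wildcard logic. Since reverse() is immutable, Postgres can fold a constant pattern at plan time, so the expression index can still apply (a sketch):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Find rows whose content ends with '124f706786cf'
SELECT * FROM simple
 WHERE reverse(content) LIKE reverse('124f706786cf') || '%';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;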

&lt;h3&gt;
  
  
  Infix Matches
&lt;/h3&gt;

&lt;p&gt;Looking at infix matches, it is not possible to translate "LIKE '%query%'" into any combination of the "~&amp;gt;~" and "~&amp;lt;~" operators. What we now need is native "~~" operator support in the index. To find such an index type, we first need to find the opclasses that support the "~~" operator, and then the index types compatible with those opclasses.&lt;br&gt;
It turns out that none of Postgres' default opclasses support "~~(text, text)", and one can run the following SQL to verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT am.amname AS index_method,
   opf.opfname AS opfamily_name,
   amop.amopopr::regoperator AS opfamily_operator
FROM pg_am am,
   pg_opfamily opf,
   pg_amop amop
WHERE opf.opfmethod = am.oid AND amop.amopfamily = opf.oid
   AND amop.amopopr::regoperator = '~~(text, text)'::regoperator;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;However, an extension that is available in Postgres by default, called "pg_trgm", supports "~~(text, text)". As the name suggests, this extension gives Postgres some trigram-related abilities. So let us install that.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# create extension pg_trgm;
CREATE EXTENSION
momen=# SELECT am.amname AS index_method,
       opf.opfname AS opfamily_name,
       amop.amopopr::regoperator AS opfamily_operator
  FROM pg_am am,
       pg_opfamily opf,
       pg_amop amop
 WHERE opf.opfmethod = am.oid AND amop.amopfamily = opf.oid
       AND amop.amopopr::regoperator = '~~(text, text)'::regoperator;
 index_method | opfamily_name | opfamily_operator
--------------+---------------+-------------------
 gist         | gist_trgm_ops | ~~(text,text)
 gin          | gin_trgm_ops  | ~~(text,text)
(2 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we have two newly available options: GIST + gist_trgm_ops, or GIN + gin_trgm_ops. Let's try out both.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# create index idx_simple_by_content_gist_trgm_ops on simple USING GIST (content gist_trgm_ops);
CREATE INDEX
Time: 28290.219 ms (00:28.290)
momen=# explain analyze select * from simple where content like '%fc687607f421-0ccb-2f44-c7f8-df354f0f%';
QUERY PLAN
-------------------------------------------------------------------------
 Index Scan using idx_simple_by_content_gist_trgm_ops on simple  (cost=0.41..115.46 rows=100 width=41) (actual time=10.810..10.811 rows=0 loops=1)
   Index Cond: (content ~~ '%fc687607f421-0ccb-2f44-c7f8-df354f0f%'::text)
 Planning Time: 1.145 ms
 Execution Time: 10.829 ms
(4 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content like '%0ccb%';
 QUERY PLAN
-------------------------------------------------------------------------
Index Scan using idx_simple_by_content_gist_trgm_ops on simple  (cost=0.41..115.46 rows=100 width=41) (actual time=1.774..104.431 rows=256 loops=1)
Index Cond: (content ~~ '%0ccb%'::text)
Rows Removed by Index Recheck: 24
Planning Time: 1.793 ms
Execution Time: 104.460 ms
(5 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# create index idx_simple_by_content_gin_trgm_ops on simple USING GIN (content gin_trgm_ops);
CREATE INDEX
Time: 18051.048 ms (00:18.051)
momen=# explain analyze select * from simple where content like '%fc687607f421-0ccb-2f44-c7f8-df354f0f%';
QUERY PLAN
-------------------------------------------------------------------------------------
Bitmap Heap Scan on simple  (cost=200.98..311.19 rows=100 width=41) (actual time=11.378..11.379 rows=0 loops=1)
Recheck Cond: (content ~~ '%fc687607f421-0ccb-2f44-c7f8-df354f0f%'::text)
-&amp;gt;  Bitmap Index Scan on idx_simple_by_content_gin_trgm_ops  (cost=0.00..200.95 rows=100 width=0) (actual time=11.375..11.375 rows=0 loops=1)
Index Cond: (content ~~ '%fc687607f421-0ccb-2f44-c7f8-df354f0f%'::text)
Planning Time: 0.294 ms
Execution Time: 11.505 ms
(6 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple where content like '%0ccb%';
QUERY PLAN
-------------------------------------------------------------------------
Bitmap Heap Scan on simple  (cost=12.88..123.09 rows=100 width=41) (actual time=0.472..0.863 rows=256 loops=1)
Recheck Cond: (content ~~ '%0ccb%'::text)
Rows Removed by Index Recheck: 24
Heap Blocks: exact=279
-&amp;gt;  Bitmap Index Scan on idx_simple_by_content_gin_trgm_ops  (cost=0.00..12.85 rows=100 width=0) (actual time=0.421..0.421 rows=280 loops=1)
Index Cond: (content ~~ '%0ccb%'::text)
Planning Time: 0.373 ms
Execution Time: 0.923 ms
(8 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Looks like GIST is slower to build than GIN, and much slower on shorter search patterns. So slow that GIST is only getting less than 2x performance compared to a parallel sequential scan on this 1M row table. In fact, GIST index takes more space, too, due to its tree structure. 
momen=# SELECT
    psai.indexrelname                              AS index_name,
    pg_size_pretty(pg_relation_size(c.oid))        AS table_size,
    pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size
FROM
    pg_tables t
LEFT JOIN pg_class c ON t.tablename = c.relname
LEFT JOIN pg_index i ON c.oid = i.indrelid
LEFT JOIN pg_stat_all_indexes psai ON i.indexrelid = psai.indexrelid
WHERE
    t.schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY 1;
                       index_name                       | table_size | index_size
--------------------------------------------------------+------------+------------
 idx_simple_by_content_b_tree                           | 73 MB      | 56 MB
 idx_simple_by_content_b_tree_text_pattern_ops          | 73 MB      | 56 MB
 idx_simple_by_content_gin_trgm_ops                     | 73 MB      | 121 MB
 idx_simple_by_content_gist_trgm_ops                    | 73 MB      | 200 MB
 idx_simple_by_reverse_content_b_tree_text_pattern_ops  | 73 MB      | 56 MB
 simple_pkey                                            | 73 MB      | 21 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So why do we even bother with GIST? Insert/update speed. Because GIN is an "inverted index", each insertion or update can trigger writes to a huge number of index pages, significantly slowing updates. A fuller discussion of GIN indexes deserves its own post. &lt;/p&gt;
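&lt;p&gt;One knob worth knowing about here (a sketch, not part of the original benchmark): GIN indexes buffer new entries in a pending list when the fastupdate storage parameter is on (the default), merging them into the main structure in bulk. Raising the pending-list limit trades some search speed for cheaper writes.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# CREATE INDEX idx_simple_by_content_gin_fast ON simple
        USING GIN (content gin_trgm_ops) WITH (fastupdate = on);
momen=# ALTER INDEX idx_simple_by_content_gin_fast
        SET (gin_pending_list_limit = 16384);  -- in kB; the default is 4MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;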

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumy5b7jswpykn97en5ql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumy5b7jswpykn97en5ql.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# INSERT INTO simple_gin_only
SELECT nextval('simple_gin_only_id_seq'), gen_random_uuid() FROM generate_series(1, 100000);
INSERT 0 100000
Time: 2761.322 ms (00:02.761)
momen=# INSERT INTO simple_gist_only
SELECT nextval('simple_gist_only_id_seq'), gen_random_uuid() FROM generate_series(1, 100000);
INSERT 0 100000
Time: 4084.976 ms (00:04.085)
momen=# INSERT INTO simple_gin_only
SELECT nextval('simple_gin_only_id_seq'), 
gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text FROM simple order by id limit 10000;
INSERT 0 10000
Time: 2669.833 ms (00:02.670)
momen=# INSERT INTO simple_gist_only
SELECT nextval('simple_gist_only_id_seq'), gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text FROM simple order by id limit 10000;
INSERT 0 10000
Time: 1263.832 ms (00:01.264)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What we observe is somewhat consistent with Postgres 9.4's documentation on this topic: as the size of the text grows, GIN's performance degradation is much more severe. So this becomes a trade-off: is your workload read-heavy or write-heavy?&lt;br&gt;
Both GIN and GIST support a ton of text operations, and aside from size and insert/update performance, there now seems to be no particular reason to rely on B-tree indexes for text fields.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# select * into simple_btree_only from simple;
momen=# create index idx_simple_btree_only_by_content on simple_btree_only using btree (content text_pattern_ops);
CREATE INDEX
momen=# INSERT INTO simple_btree_only
SELECT nextval('simple_btree_only_id_seq'), gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text || gen_random_uuid()::text FROM simple order by id limit 10000;
INSERT 0 10000
Time: 412.707 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Regular-Expression Matches
&lt;/h3&gt;

&lt;p&gt;Remember when we said "both GIN and GIST support a ton of text operations"? Well, the payback is quick on this one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple_gin_only where content ~ 'ba1.*abc';
QUERY PLAN
-------------------------------------------------------------------------
Bitmap Heap Scan on simple_gin_only  (cost=249.65..398.44 rows=135 width=41) (actual time=2.126..2.338 rows=37 loops=1)
Recheck Cond: (content ~ 'ba1.*abc'::text)
Rows Removed by Index Recheck: 28
Heap Blocks: exact=65
-&amp;gt;  Bitmap Index Scan on idx_simple_gin_only_by_content_gin_trgm_ops  (cost=0.00..249.61 rows=135 width=0) (actual time=2.106..2.106 rows=83 loops=1)
Index Cond: (content ~ 'ba1.*abc'::text)
Planning Time: 0.454 ms
Execution Time: 2.384 ms
(8 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple_gist_only where content ~ 'ba1.*abc';
QUERY PLAN
-------------------------------------------------------------------------
Index Scan using idx_simple_gist_only_by_content_gist_trgm_ops on simple_gist_only  (cost=0.41..156.79 rows=136 width=41) (actual time=7.026..264.619 rows=42 loops=1)
Index Cond: (content ~ 'ba1.*abc'::text)
Rows Removed by Index Recheck: 28
Planning Time: 0.230 ms
Execution Time: 264.655 ms
(5 rows)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Again, GIST seems like a rather clear loser here, though it still performs better than a sequential scan. So the answer would most likely be: "just use GIN with trigrams". &lt;/p&gt;
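&lt;p&gt;As a recap, that recommendation boils down to the following (a minimal sketch; the table and index names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# CREATE EXTENSION IF NOT EXISTS pg_trgm;
momen=# CREATE INDEX idx_t_by_content_gin_trgm_ops ON t USING GIN (content gin_trgm_ops);
-- accelerates LIKE/ILIKE '%...%' patterns as well as ~ and ~* regular expressions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;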

&lt;h3&gt;
  
  
  Uniqueness Constraint
&lt;/h3&gt;

&lt;p&gt;Neither GIN nor GIST indexes can be used to support a unique constraint. In fact, according to Postgres' own documentation, "currently, only B-tree indexes can be declared unique."&lt;br&gt;
That is usually not too much of a problem, but what about a huge string that doesn't fit in a single page? Every B-tree entry must fit inside a page, which in most cases is 8kB in size. So if a column carries a B-tree index (unique or not) and we try to store a string larger than that, we are greeted with the following error message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# insert into simple_btree_only values (nextval('simple_btree_only_id_seq'), repeat('a', 100000000));
ERROR:  index row requires 1144712 bytes, maximum size is 8191
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But there are ways out. We can shorten the content stored in the B-tree index by hashing it first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# CREATE UNIQUE INDEX udx_simple_btree_only_by_content_hash ON simple_btree_only USING btree (digest(content, 'sha512'));
CREATE INDEX
momen=# insert into simple_btree_only values (1, 'f0f453fd-8f7c-44f2-bcc0-124f706786cf');
ERROR:  duplicate key value violates unique constraint "udx_simple_btree_only_by_content_hash"
DETAIL:  Key (digest(content, 'sha512'::text))=(\xcf187c6795e52b388d721afeae11e86d115742bf22deae5e360b028643b51a1a1d13dd646fa72b90c8fe16edd372c2eeb76cdebe6424b18ce0a0aecc76290889) already exists.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem with this approach is that although we now have a B-tree index that is orders of magnitude better at point selects than GIN or GIST indexes (the reason is, again, worth another post), we can't actually utilize it unless we modify our queries to operate on digest(content, 'sha512') rather than the vanilla content. Doable, but not great. &lt;/p&gt;
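&lt;p&gt;Concretely, a point select has to be phrased against the same expression for the planner to consider the functional index (a sketch; digest comes from the pgcrypto extension, and since digest is immutable, the right-hand side folds to a constant):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# explain analyze select * from simple_btree_only
        where digest(content, 'sha512') = digest('f0f453fd-8f7c-44f2-bcc0-124f706786cf', 'sha512');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;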

&lt;p&gt;Since we are already hashing via the "digest" function, why don't we just use a hash index? We can't directly create a unique index using hash, as stated in the aforementioned documentation. However, for some reason that is yet unclear to me, we can create an exclusion constraint that automatically creates a hash index to support it. The same index can also be used for point selects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# alter table simple_hash_only add constraint udx_simple_hash_only_by_content exclude using hash (content with =);
ALTER TABLE
Time: 3299.922 ms (00:03.300)
momen=# explain analyze select * from simple_hash_only where content = 'f0f453fd-8f7c-44f2-bcc0-124f706786cf';
                                                                    QUERY PLAN
-------------------------------------------------------------------------
Index Scan using udx_simple_hash_only_by_content on simple_hash_only  (cost=0.00..2.22 rows=1 width=41) (actual time=0.037..0.039 rows=1 loops=1)
Index Cond: (content = 'f0f453fd-8f7c-44f2-bcc0-124f706786cf'::text)
Planning Time: 0.214 ms
Execution Time: 0.065 ms
(4 rows)
momen=# insert into simple_hash_only values (1, 'f0f453fd-8f7c-44f2-bcc0-124f706786cf');
ERROR:  conflicting key value violates exclusion constraint "udx_simple_hash_only_by_content"
DETAIL:  Key (content)=(f0f453fd-8f7c-44f2-bcc0-124f706786cf) conflicts with existing key (content)=(f0f453fd-8f7c-44f2-bcc0-124f706786cf).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Simple Version
&lt;/h3&gt;

&lt;p&gt;Same as TL;DR.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Longer Version
&lt;/h3&gt;

&lt;p&gt;TL;DR plus the following:&lt;/p&gt;

&lt;p&gt;If you are storing long strings (more than a few hundred characters on average) and/or searching with long patterns, use GIST instead of GIN, especially when your workload is comparatively write-heavy. If you know, or can enforce, the maximum length of the stored strings, a B-tree can be used for both uniqueness and point selects. &lt;/p&gt;
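&lt;p&gt;Enforcing such a maximum length can be done with a CHECK constraint (a sketch; the 2000-character cap is illustrative, chosen to keep every entry well under the 8kB B-tree limit):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# ALTER TABLE simple ADD CONSTRAINT content_max_len
        CHECK (length(content) BETWEEN 0 AND 2000);
momen=# CREATE UNIQUE INDEX udx_simple_by_content ON simple USING btree (content);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;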

</description>
      <category>postgres</category>
      <category>powerfuldevs</category>
      <category>database</category>
      <category>workplace</category>
    </item>
    <item>
      <title>Why We Chose PostgreSQL for Our No-Code Platform: A Deep Dive into Database Decisions</title>
      <dc:creator>Yaokai Jiang</dc:creator>
      <pubDate>Wed, 18 Sep 2024 11:38:24 +0000</pubDate>
      <link>https://dev.to/momen_hq/why-we-chose-postgresql-for-our-no-code-platform-a-deep-dive-into-database-decisions-2cpl</link>
      <guid>https://dev.to/momen_hq/why-we-chose-postgresql-for-our-no-code-platform-a-deep-dive-into-database-decisions-2cpl</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://momen.app" rel="noopener noreferrer"&gt;Momen&lt;/a&gt; is a cloud-native no-code IDE (integrated development environment) that we've been developing since early 2020. It aims to significantly lower the hurdle one has to overcome to build an application capable of serving requests from a non-insignificant crowd. To that end, we first needed to decide how the shared state of that application was to be stored—in other words, what database technology we would use.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;We chose (the somewhat unconventional) PostgreSQL due to cost and SQL compatibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hard Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serve at least 1 million users per app&lt;/strong&gt;: For a reasonably active user base, that translates to roughly 100 million API invocations per day, and peak traffic of around 2 million API invocations per minute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ACID compliance&lt;/strong&gt;: Given that our target users were not going to be experienced backend software engineers, having ACID guarantees would be a must for the behavior of the system to be understandable. Explaining to a person “this group of modifications failed completely and none of them had any effect on the system” is much easier than telling the same person “the first two modifications succeeded but the last one failed, now you need to both detect this situation and provide a rollback strategy to repair the system. Oh, by the way, the rollback itself may also fail, so you need to rollback the rollback…”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reasonably cheap&lt;/strong&gt;: Every dollar we save on infrastructure is a dollar saving that we can choose to either pass onto our customers or our bottom line. Since the database is usually going to be the bottleneck of a system most of the time, the understanding was that a significant proportion of infrastructure spending would be allocated to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Team familiarity&lt;/strong&gt;: The team should know enough or be able to ramp up on that database so when problems would arise, especially performance-related problems, the team would know how to act to resolve such issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No cloud-vendor lock-in&lt;/strong&gt;:  From the get-go, we had decided that we needed to be cloud-vendor independent. We indeed started with everything on Aliyun, in hindsight, it was quickly demonstrated that having vendor independence paid off as we encountered clients that were located in different continents and clients that needed on-premises deployment. Any vendor-specific technology would have been a great hindrance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Support for secondary index&lt;/strong&gt;: - Without a secondary index, the only choice left for efficient filtering and sorting would be what I would consider an excessive amount of caching, along which would come resource consumption, and the unavoidable problem of cache invalidation/consistency, essentially implementing one’s secondary index in the application layer. One exception to this would be small datasets. Many no-code vendors do take this approach. For example, Mingdao, a fairly popular no-code platform, has tables whose search time scales linearly with the number of records in that table (unless one is searching using the primary key, but that primary key is auto-generated and devoid of real-world meaning). The result is, for million-row tables, search time goes beyond 10 seconds, effectively unusable for any consumer-facing application. To be fair, Mingdao never claimed to be consumer-facing in the first place though.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8v38kc0x8t0xo666jbq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd8v38kc0x8t0xo666jbq.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Nice-to-Haves
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Appropriate level of abstraction&lt;/strong&gt;: It is technically possible to work with a simple KV (key-value) store, though it would be for example quite nice if we could have typing so that the database would catch some of the type-related bugs, as well as aid in debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Active development/community&lt;/strong&gt;:  So that we may benefit from future improvements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQL compatibility&lt;/strong&gt;:  SQL is a great declarative language that is also ubiquitous. It would give us at least three benefits.

&lt;ol&gt;
&lt;li&gt;Defers execution planning to the dedicated SQL engine.&lt;/li&gt;
&lt;li&gt;Leverages a plethora of existing SQL-compatible tools.&lt;/li&gt;
&lt;li&gt;A large portion of the workforce is trained in SQL, making onboarding easier.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Contestants
&lt;/h2&gt;

&lt;p&gt;Roughly grouped into two categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Relational databases (SQL)&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;MySQL family (MySQL, MariaDB)&lt;/li&gt;
&lt;li&gt;PostgreSQL&lt;/li&gt;
&lt;li&gt;Oracle&lt;/li&gt;
&lt;li&gt;Amazon Aurora / Aliyun PolarDB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;NoSQL databases&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;MongoDB&lt;/li&gt;
&lt;li&gt;HBase&lt;/li&gt;
&lt;li&gt;Cassandra&lt;/li&gt;
&lt;li&gt;Redis&lt;/li&gt;
&lt;li&gt;DynamoDB (AWS) / TableStore (Aliyun)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqp8hs52d3gp3j1be0wi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwqp8hs52d3gp3j1be0wi.png" alt=" " width="800" height="531"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Initial Screen
&lt;/h2&gt;

&lt;p&gt;We first ruled out the following databases for not meeting some hard requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Aurora / Aliyun PolarDB&lt;/strong&gt;: Vendor lock-in&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Oracle&lt;/strong&gt;: Unaffordable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HBase&lt;/strong&gt;: No secondary index&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cassandra&lt;/strong&gt;: No secondary index&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redis&lt;/strong&gt;: Mostly in-memory and too expensive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB / TableStore&lt;/strong&gt;: Vendor lock-in&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deeper Dive
&lt;/h2&gt;


&lt;h3&gt;
  
  
  MySQL vs PostgreSQL
&lt;/h3&gt;

&lt;p&gt;Between the SQL RDBMSs, the choice was somewhat easy for us. They are generally very similar, so we focused on a few details that we thought were important.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Since version 9.4 (2014), Postgresql has supported JSON natively, with both B-tree indexes on the result of an extraction function applied to a single attribute and the more generic GIN index (especially when combined with jsonb_path_ops) for efficient searches on non-predetermined keys and key/value pairs. Since 5.7.8 (2015), MySQL has relied on creating virtual columns and then indexing those columns to achieve the same effect. Only in MySQL 8.0 was support for indexing JSON-array-contains introduced; as of now, there is no support for generic JSON-object-contains.&lt;/li&gt;
&lt;li&gt;Postgresql has a flexible TEXT type used for all strings of arbitrary length. MySQL has four different TEXT types for various lengths, a CHAR type, and VARCHAR types for which one has to know the length of potential input strings in advance. This is, in our opinion, an unnecessary and unacceptable hurdle for our users.&lt;/li&gt;
&lt;li&gt;Postgresql supports DDL within a transaction, so during migrations (which we would be performing a lot of, automatically) and any necessary manual maintenance, we can be much more confident: DDL statements either all run or all roll back. This is impossible in MySQL.&lt;/li&gt;
&lt;li&gt;Postgresql has the great LISTEN/NOTIFY feature, where one can have triggers that send notifications to any listeners on transaction commit. This is fantastic for the "live query" semantics we were going to implement for GraphQL subscriptions. It can also serve as an invalidation mechanism for any future caching layer. There is a GitHub project that tries to mimic this for MySQL, &lt;a href="https://github.com/Cyclonecode/mysql-notification" rel="noopener noreferrer"&gt;https://github.com/Cyclonecode/mysql-notification&lt;/a&gt;, but it requires building and installing a plugin, and there is no guarantee that notifications are sent on transaction commit.&lt;/li&gt;
&lt;li&gt;Last but not least, we had and still have a pretty good in-depth understanding of Postgresql in the team. &lt;/li&gt;
&lt;/ol&gt;
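<p>Point 1 can be sketched as follows (a hypothetical events table, not from the original comparison):</p>

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;momen=# CREATE TABLE events (id bigserial PRIMARY KEY, payload jsonb);
momen=# CREATE INDEX idx_events_payload ON events USING GIN (payload jsonb_path_ops);
-- containment searches on non-predetermined keys can use the GIN index:
momen=# SELECT * FROM events WHERE payload @&amp;gt; '{"status": "active"}';
-- a B-tree on an extracted attribute, for a single known key:
momen=# CREATE INDEX idx_events_status ON events ((payload-&amp;gt;&amp;gt;'status'));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;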

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l74dvvu91kroyy4fpap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l74dvvu91kroyy4fpap.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB vs PostgreSQL
&lt;/h3&gt;

&lt;p&gt;To begin choosing between MongoDB and Postgresql would not have been rational for me before I had familiarized myself with MongoDB in the first place, as I was no expert in this technology.&lt;/p&gt;

&lt;h4&gt;
  
  
  MongoDB's Strengths:
&lt;/h4&gt;

&lt;p&gt;To summarize my findings on its strengths:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;MongoDB is schemaless, even though it does come with "collections", which are more or less equivalent to tables. Since no schema is enforced, documents within each collection merely carry hints to the programmer about what they contain, e.g. a collection of "employees" probably represents real-life employees, with their names, DoBs, etc. The advantage of this approach is that one no longer has to design and define a schema for each entity, and theoretically no longer needs to run migrations.&lt;/li&gt;
&lt;li&gt;MongoDB natively offers the ability to scale out in the form of replica sets and/or shards. Replica sets are quite commonly seen in the SQL world as well, essentially multiple nodes contain exact replicas of the data, offering increased read performance at the cost of a slight write performance penalty. Where it tries to differentiate itself is native sharding support. MongoDB lets you shard your dataset based on a sharding key, and provides you a slew of tools such as automatic rebalancing of shards to accomplish that goal.&lt;/li&gt;
&lt;li&gt;MongoDB is widely adopted as the go-to database technology for many existing no-code / low-code solutions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But when I researched further and started comparing against what can be achieved using Postgresql, things did not seem as clear-cut as they had first appeared.&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Schemaless&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Schemaless is fine and dandy until you start using that data. With the strict schema enforcement afforded by a typical RDBMS, one is absolutely sure of what type each field is (save when you declare everything as BLOBs). Without that certainty, a hefty mental burden is placed on the developer during development, typically resulting in a nasty array of if or case statements in multiple places, in which one has to consider all possibilities, e.g. is this field present, is it of type A or type B, etc. This problem is so prominent in "normal" backend development that MongoDB itself has native support for schema validation: &lt;a href="https://docs.mongodb.com/manual/core/schema-validation/" rel="noopener noreferrer"&gt;https://docs.mongodb.com/manual/core/schema-validation/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpel0hwt0l3060nbj7yjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpel0hwt0l3060nbj7yjv.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking back at other no-code offerings, it is immediately obvious that all of them implement a semantic schema for their users, because most of them take the form of a "spreadsheet on steroids", where you naturally have the concept of tables, each with a fixed list of columns. These columns in turn determine what type of data may be stored. Some data may indeed appear in denormalized form in such columns, like lists or even lists of record references, but that is usually the extent to which they differ from a traditional relational schema.&lt;/p&gt;

&lt;p&gt;What we can conclude from this observation is that under the hood, vendors do perform extensive schema maintenance, ranging from table and column creation/deletion/renaming to foreign key constraint enforcement. Alongside those, migrations are unavoidable from time to time as well. That said, this is not to say that MongoDB&amp;#8217;s first strength mentioned above is rendered completely moot in this application context.&lt;/p&gt;

&lt;p&gt;Postgresql, as we are now all aware, naturally enforces a SQL-compliant schema. Even granting that a schema is required, some downsides compared to MongoDB still exist. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Schema changes in Postgresql are explicit in the data storage layer, so some operations require storage-layer locking (a detailed discussion on locks comes later), notably adding/removing columns, although this problem is not as bad as it may appear on the surface (&lt;a href="https://brandur.org/postgres-default" rel="noopener noreferrer"&gt;https://brandur.org/postgres-default&lt;/a&gt;). I ran an experiment on my personal Surface Book 2 using Postgresql 11. I first created a table and inserted 1 million rows, occupying 1GB of disk space. Then I performed various schema-modifying operations, each repeated 10 times, and averaged the time taken:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbr85m8o7tmidt7olmw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbr85m8o7tmidt7olmw8.png" alt=" " width="705" height="149"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Migration is more expensive on Postgresql than on MongoDB. Postgresql employs a row-based storage engine. Together with its lack of in-place update semantics (somewhat alleviated by Heap-only tuples A.K.A. HOT and The Oversized-Attribute Storage Technique A.K.A. TOAST), Postgresql incurs significant cost when performing updates, which is the core of any data migration process. This is not the case with MongoDB, where partial document update was built-in from the get-go. On a reasonably simple table consisting of two long fields, two text fields, and two timestamp fields, updating one timestamp field on 1 million records (the entire table), took more than 6 seconds on the same Surface Book 2. On MongoDB, the same operation took 5 seconds, better, but not by much.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5u3ecpyv0ymz1n83f8t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5u3ecpyv0ymz1n83f8t.png" alt=" " width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  &lt;strong&gt;Horizontal Scaling&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;Indeed, MongoDB offers great native support for horizontal scaling, in the form of both replica sets and shards, as mentioned above. But that support cannot escape the CAP (consistency, availability, and partition tolerance) theorem. It is still subject to the same issues when it comes to distributed transactions and distributed joins, not to mention that it completely side-steps the enforcement of foreign key constraints, as that was never offered to begin with. On the other hand, Postgresql&amp;#8217;s support for horizontal scaling is not as barren as it might initially appear, all whilst offering much better single-node efficiency, to the point of eliminating the need for horizontal scaling (which comes at quite a hefty engineering and maintenance cost) for a great swath of applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Postgresql&amp;#8217;s single-node performance is extremely impressive compared to MongoDB, especially for OLTP workloads with large datasets that do not fit in memory (&lt;a href="https://info.enterprisedb.com/rs/069-ALB-339/images/PostgreSQL_MongoDB_Benchmark-WhitepaperFinal.pdf" rel="noopener noreferrer"&gt;https://info.enterprisedb.com/rs/069-ALB-339/images/PostgreSQL_MongoDB_Benchmark-WhitepaperFinal.pdf&lt;/a&gt;). The TL;DR version is that Postgresql 11 is 25x-40x faster. This effectively means that for OLTP, a single Postgresql node can handle a load that would crush a whole cluster of MongoDB nodes. In that experiment, 16 vCPUs got Postgresql 25k TPS; considering AWS and Aliyun both offer instances with 128 vCPUs, it is not unreasonable to expect 100k-200k TPS on a single node. That is more than enough to support most applications, save for giants like Amazon or Facebook.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We have adopted a single-tenant approach to system design. The probability of any single tenant outgrowing 100k-200k tps is that much smaller.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are third-party horizontal-scaling solutions for PostgreSQL, such as Citus, which is now maintained by Microsoft. One worry with a third-party solution is the timeliness of updates. Given that Citus’s corresponding release shipped one day after PostgreSQL 14’s GA (&lt;a href="https://www.citusdata.com/blog/2021/10/01/how-we-shipped-postgres-14-on-azure-within-a-day/" rel="noopener noreferrer"&gt;https://www.citusdata.com/blog/2021/10/01/how-we-shipped-postgres-14-on-azure-within-a-day/&lt;/a&gt;), I would say there is pretty solid evidence that Microsoft is committed to this product. Citus offers both replication and sharding, and on top of that, distributed transactions as well as extensive, though not complete, support for foreign key constraints. Beyond Citus, there are other PostgreSQL-compatible solutions such as YugabyteDB.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
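&lt;p&gt;To make the Citus model concrete: distributing a table across worker nodes is a single extension call. This is only a sketch; the table and column names below are hypothetical, and it assumes the Citus extension is installed.&lt;/p&gt;

```sql
-- Hypothetical multi-tenant table; tenant_id doubles as the sharding key.
CREATE TABLE orders (
  tenant_id bigint NOT NULL,
  order_id  bigint NOT NULL,
  total     numeric(12,2),
  PRIMARY KEY (tenant_id, order_id)
);

-- Citus extension call: co-locates each tenant's rows on one worker,
-- so single-tenant queries, joins, and transactions stay local.
SELECT create_distributed_table('orders', 'tenant_id');
```

&lt;p&gt;Queries filtered on the distribution column are routed to a single shard; cross-tenant queries fan out across workers.&lt;/p&gt;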

&lt;h5&gt;
  
  
  &lt;strong&gt;Wide Adoption&lt;/strong&gt;
&lt;/h5&gt;

&lt;p&gt;MongoDB’s wide adoption certainly helps with talent acquisition from a company’s perspective, especially for a company still in the start-up stage, when few candidates would deliberately familiarize themselves with a technology for a job opening.&lt;/p&gt;

&lt;p&gt;Luckily, this is where PostgreSQL’s similarity to MySQL and other RDBMSs becomes a saving grace. SQL-compliant relational databases are so ubiquitous that as soon as one relaxes the hiring criteria from “PostgreSQL familiarity” to “relational database familiarity”, the talent pool suddenly materializes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB’s promise of freedom through schemalessness might have played a part: instinctively, one associates no-code with no-schema. But as discussed above, no-code should actually place more restrictions on its users to ensure the predictability of the resulting software; our job is to reduce users’ mental burden, not to increase it by affording them more freedom.&lt;/li&gt;
&lt;li&gt;Most low-code/no-code platforms are built as multi-tenant systems, so the need for horizontal scaling may arise much sooner than in a single-tenant system. The tenancy itself is often a good candidate for a sharding key, so natively supported sharding can be perceived as a great advantage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these reasons seemed particularly convincing to me, although I am of course open to differing opinions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf53fc7li0v8wwzv0b99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf53fc7li0v8wwzv0b99.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL's Advantages
&lt;/h3&gt;

&lt;p&gt;It comes down to efficiency, schema enforcement, and a built-in query planner/optimizer.&lt;/p&gt;

&lt;p&gt;We have already alluded to PostgreSQL’s efficiency when discussing MongoDB: its single-node performance can be 25x-40x that of MongoDB’s. One of my intuitions is that PostgreSQL makes much better use of available memory than MongoDB. There are plenty of complaints on StackOverflow about MongoDB consuming too much memory. One example I can find is &lt;a href="https://www.bmc.com/blogs/mongodb-memory-usage-and-management/" rel="noopener noreferrer"&gt;https://www.bmc.com/blogs/mongodb-memory-usage-and-management/&lt;/a&gt;, where 3.5GB worth of data was able to exhaust 8GB of memory under an unspecified query load. This surprised me. In hindsight, PostgreSQL has been exceptionally memory-friendly for us, as a past demo project (though its scale surpasses most definitions of a demo) showed.&lt;/p&gt;

&lt;p&gt;Its typical workload involves creating “exam”s that consist of a random selection of 100 multiple-choice “question”s, then going through them, displaying each question’s content along with its choices. The result of answering each “question” is recorded for score calculation and error collection. The instance had a meager 1GB memory allocation and, shamefully, no proper tuning: no changes to random_page_cost, effective_cache_size, shared_buffers, or work_mem. Yet it performed swiftly, with typical latency under 50ms.&lt;br&gt;
This efficiency translates directly into much lower hardware costs, perfectly meeting our “reasonably cheap” hard requirement.&lt;/p&gt;
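&lt;p&gt;For reference, the tuning knobs named above can be set at runtime via ALTER SYSTEM rather than by editing postgresql.conf. The values below are illustrative guesses for a small instance, not recommendations:&lt;/p&gt;

```sql
-- Illustrative settings for a ~1 GB instance (untuned guesses, not advice).
ALTER SYSTEM SET shared_buffers = '256MB';        -- ~25% of RAM is a common starting point
ALTER SYSTEM SET effective_cache_size = '768MB';  -- planner hint: OS cache + shared_buffers
ALTER SYSTEM SET work_mem = '16MB';               -- per-sort / per-hash memory budget
ALTER SYSTEM SET random_page_cost = 1.1;          -- lower on SSDs, where random I/O is cheap
SELECT pg_reload_conf();  -- applies reloadable settings; shared_buffers still needs a restart
```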

&lt;p&gt;Schema enforcement is great for catching errors early; I expect it to be our saving grace when it comes to preventing data corruption. That said, this conclusion rests only on patterns I have recognized in past experience and common sense, rather than on any scientific study.&lt;/p&gt;
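&lt;p&gt;As a minimal sketch of what schema enforcement buys, assuming a hypothetical table along the lines of the exam demo described above:&lt;/p&gt;

```sql
-- Hypothetical table; constraints encode the invariants up front.
CREATE TABLE answers (
  exam_id     bigint NOT NULL REFERENCES exams(id),  -- FK: no orphaned answers
  question_id bigint NOT NULL,
  choice      int    NOT NULL CHECK (choice BETWEEN 0 AND 3)  -- only 4 valid choices
);

-- Rejected immediately with a constraint violation, instead of silently
-- corrupting score calculation later:
INSERT INTO answers (exam_id, question_id, choice) VALUES (1, 42, 7);
```

&lt;p&gt;In a schemaless store, the bad row above would be accepted and the error would only surface, if at all, at read time.&lt;/p&gt;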

&lt;p&gt;Unlike MongoDB, where the programmer has to algorithmically construct a query plan for any complex query (defined here as one requiring at least one JOIN), PostgreSQL generates candidate query plans for you, and its query optimizer then picks the (hopefully) best plan based on statistics. While this can be roughly filed under “SQL compatibility”, it is a great deal more useful than it may appear on the surface. On a no-code platform, it is very difficult, if not impossible, for the programmer to know the distribution of data in each column, something that is essential to coming up with an appropriate query plan. And when it comes to query plans, the search space is often vast:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filter first or join first?&lt;/li&gt;
&lt;li&gt;Index intersection or nested loop index lookup?&lt;/li&gt;
&lt;li&gt;Does hash join help?&lt;/li&gt;
&lt;li&gt;Do we materialize the actual join?&lt;/li&gt;
&lt;li&gt;…&lt;/li&gt;
&lt;/ul&gt;
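&lt;p&gt;Those decisions are exactly what EXPLAIN exposes. A sketch against a hypothetical two-table schema (names are illustrative):&lt;/p&gt;

```sql
-- EXPLAIN ANALYZE runs the query and prints the plan the optimizer chose,
-- with estimated vs. actual row counts and timings per plan node.
EXPLAIN ANALYZE
SELECT q.content
FROM questions q
JOIN answers a ON a.question_id = q.id
WHERE a.exam_id = 1;

-- Keep the planner's per-column statistics fresh so its choices stay good.
ANALYZE answers;
```

&lt;p&gt;The planner weighs sequential scan vs. index scan, nested loop vs. hash join, and so on, using those statistics; these are precisely the decisions a MongoDB application would have to hard-code.&lt;/p&gt;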

&lt;p&gt;Doing this ourselves would boil down to designing and implementing our own query optimizer, an effort I would categorize as “re-inventing the wheel”, and without any immediately obvious benefit.&lt;/p&gt;

&lt;h3&gt;
  
  
  PostgreSQL's Main Issues
&lt;/h3&gt;

&lt;p&gt;Lock and bloat.&lt;/p&gt;

&lt;p&gt;Locks are something nobody can get around when it comes to concurrent access to mutable state. This is not to say the problem is exclusive to PostgreSQL; it manifests in MongoDB as well. It is just a lot more likely to bite with PostgreSQL.&lt;/p&gt;

&lt;p&gt;Altering a table itself is almost instantaneous; common operations include adding and dropping columns. Both require an ACCESS EXCLUSIVE lock on the table being modified in PostgreSQL. If another transaction is still running, the ALTER TABLE command queues a pending request for the table’s ACCESS EXCLUSIVE lock, which blocks all subsequent access to that table, including reads, until the ALTER finishes. This scenario does not exist in MongoDB, as no operation is needed to add or drop fields (other than perhaps deleting the fields’ content), thereby side-stepping the issue entirely.&lt;/p&gt;

&lt;p&gt;Bloat is a (notorious) side-effect of PostgreSQL’s simple MVCC implementation: as a table undergoes updates or deletes, its disk consumption and that of its indices keep growing, until VACUUM reclaims the dead tuples. Beyond raw disk consumption, the enlarged indices also inflate the working set, reducing overall performance. This problem is somewhat unique to PostgreSQL. Workarounds do exist, such as pg_repack, but they all come at the cost of increased maintenance effort and temporary resource consumption.&lt;/p&gt;
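&lt;p&gt;A common mitigation for the lock-queue hazard, plus a quick way to spot bloat, sketched below; the table name and timeout value are illustrative:&lt;/p&gt;

```sql
-- Bound how long the ALTER may wait for the ACCESS EXCLUSIVE lock, so a
-- long-running transaction cannot make it block all readers indefinitely.
SET lock_timeout = '2s';
ALTER TABLE questions ADD COLUMN difficulty int;  -- on timeout, retry later

-- Rough bloat indicator: dead tuples not yet reclaimed by VACUUM.
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

&lt;p&gt;If the ALTER times out, only the retry is lost; readers and writers were blocked for at most two seconds instead of the duration of whatever transaction was holding the table.&lt;/p&gt;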

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45e71472yx2dcbgnhwc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45e71472yx2dcbgnhwc8.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the end, what swayed our decision towards PostgreSQL was cost and, most importantly, the existence of a mature query planner/optimizer. We judged that the engineering effort we would have to expend on more sophisticated lock management and potential sharding management would be well worth the runtime savings and not having to “re-invent the wheel”.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;PostgreSQL&lt;/th&gt;
&lt;th&gt;MongoDB&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Serve 1 million users per app&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACID compliance&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reasonably cheap&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Team familiarity&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;No cloud vendor lock-in&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Support for secondary index&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Appropriate level of abstraction&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Active development/community&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQL compatibility&lt;/td&gt;
&lt;td&gt;+&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>powerfuldevs</category>
      <category>webdev</category>
      <category>beginners</category>
      <category>postgressql</category>
    </item>
  </channel>
</rss>
