<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: egor romanov</title>
    <description>The latest articles on DEV Community by egor romanov (@egor_romanov).</description>
    <link>https://dev.to/egor_romanov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1018909%2F8a15e19b-6c46-4a66-a4f9-ec3b74087a42.jpeg</url>
      <title>DEV Community: egor romanov</title>
      <link>https://dev.to/egor_romanov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/egor_romanov"/>
    <language>en</language>
    <item>
      <title>pgvector vs Pinecone: cost and performance</title>
      <dc:creator>egor romanov</dc:creator>
      <pubDate>Mon, 23 Oct 2023 15:49:47 +0000</pubDate>
      <link>https://dev.to/supabase/pgvector-vs-pinecone-cost-and-performance-22g5</link>
      <guid>https://dev.to/supabase/pgvector-vs-pinecone-cost-and-performance-22g5</guid>
      <description>&lt;p&gt;At Supabase, we believe that a combination of Postgres and pgvector serves as a better alternative to single-purpose databases like Pinecone for AI tasks. This isn't the first time a Postgres-based solution has successfully rivaled specialized databases designed for specific data types. Timescale for time-series data and Greenplum for analytics are just a few examples.&lt;/p&gt;

&lt;p&gt;We decided to put Postgres vector performance to the test and run a direct comparison between pgvector and Pinecone.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Pinecone?
&lt;/h2&gt;

&lt;p&gt;Pinecone is a fully managed cloud vector database built solely for storing and searching vector data. It is straightforward to start with and to scale. It employs a proprietary ANN index and supports neither exact nearest neighbor search nor index fine-tuning. The only setting that lets you trade query accuracy against speed is the choice of pod type when creating an index.&lt;/p&gt;

&lt;p&gt;So, before we dive into their performance, let us first introduce Pinecone's offerings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pinecone has 3 Pod types for indexes
&lt;/h3&gt;

&lt;p&gt;An index on Pinecone is made up of pods, which are units of cloud resources (vCPU, RAM, disk) that provide storage and compute for each index.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Capacity / Vectors&lt;/th&gt;
&lt;th&gt;QPS&lt;/th&gt;
&lt;th&gt;Accuracy&lt;/th&gt;
&lt;th&gt;Price per unit per month&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;s1&lt;/td&gt;
&lt;td&gt;5,000,000 768d (~2,500,000 1536d)&lt;/td&gt;
&lt;td&gt;Slowest&lt;/td&gt;
&lt;td&gt;0.98&lt;/td&gt;
&lt;td&gt;$80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p1&lt;/td&gt;
&lt;td&gt;1,000,000 768d (~500,000 1536d)&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;0.99&lt;/td&gt;
&lt;td&gt;$80&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p2&lt;/td&gt;
&lt;td&gt;1,100,000 768d (~550,000 1536d)&lt;/td&gt;
&lt;td&gt;Fastest&lt;/td&gt;
&lt;td&gt;0.94&lt;/td&gt;
&lt;td&gt;$120&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Pods can be scaled in two dimensions, vertically and horizontally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;vertical scaling&lt;/strong&gt; can be used to fit more vectors on a single pod: x1, x2, x4, x8;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;horizontal scaling&lt;/strong&gt; increases the number of pods or creates replicas to boost queries per second (QPS). This works linearly for Pinecone: doubling the number of replica pods doubles your QPS.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Benchmarking methodology
&lt;/h2&gt;

&lt;p&gt;We utilized the &lt;a href="https://github.com/erikbern/ann-benchmarks"&gt;ANN Benchmarks&lt;/a&gt; methodology, a standard for benchmarking vector databases. Our tests used the &lt;a href="https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M/blob/main/README.md"&gt;dbpedia dataset&lt;/a&gt; of 1,000,000 OpenAI embeddings (1536 dimensions) and inner product distance metric for both Pinecone and pgvector.&lt;/p&gt;

&lt;p&gt;To compare Pinecone and pgvector on equal grounds, we opted for the following setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;pgvector&lt;/strong&gt;: A single Supabase 2XL instance (~$410/month; 8-core ARM CPU, 32 GB RAM) with an HNSW index built with the parameters &lt;code&gt;m='36'&lt;/code&gt; and &lt;code&gt;ef_construction='128'&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pinecone&lt;/strong&gt;: A pod vertically scaled to the smallest option that fits the dbpedia dataset into the index on a single pod. We then added replicas to match the budget (slightly exceeding it in all cases, at ~$480/month).&lt;/li&gt;
&lt;/ul&gt;
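As a concrete illustration of the pgvector side of this setup, the index could be created with a statement like the one this helper builds. The table and column names ("documents", "embedding") are hypothetical; `m=36` and `ef_construction=128` are the build parameters quoted above, and `vector_ip_ops` selects the inner product distance metric used in the benchmark:

```python
# Sketch only: builds the CREATE INDEX statement for an HNSW index with the
# benchmark's build parameters. Table/column names are hypothetical.
def hnsw_index_sql(table: str, column: str, m: int = 36, ef_construction: int = 128) -> str:
    # vector_ip_ops makes the index use inner-product distance
    return (
        f"CREATE INDEX ON {table} USING hnsw ({column} vector_ip_ops) "
        f"WITH (m = {m}, ef_construction = {ef_construction});"
    )

print(hnsw_index_sql("documents", "embedding"))
```

Higher `m` and `ef_construction` values build a denser graph: slower to construct, but capable of higher accuracy at query time.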

&lt;p&gt;To reduce network latency, we placed our clients in the same cloud provider and region as the database. Experiments were run in a parallel configuration, varying the number of concurrent clients from 5 to 100 to determine the maximum QPS for each setup.&lt;/p&gt;
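The load-generation side of such a benchmark can be sketched as follows; this is not the actual harness, just a minimal illustration where `query_fn` stands in for a real vector-search call (here a 1 ms dummy sleep) and aggregate QPS is total queries divided by wall time:

```python
import concurrent.futures
import time

def measure_qps(query_fn, clients: int, queries_per_client: int) -> float:
    """Run query_fn from `clients` concurrent workers; return aggregate QPS."""
    def worker():
        for _ in range(queries_per_client):
            query_fn()
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=clients) as pool:
        for future in [pool.submit(worker) for _ in range(clients)]:
            future.result()  # propagate any client errors
    elapsed = time.perf_counter() - start
    return clients * queries_per_client / elapsed

# Dummy query standing in for a real vector search:
qps = measure_qps(lambda: time.sleep(0.001), clients=5, queries_per_client=20)
```

Sweeping `clients` from 5 to 100, as in the methodology above, reveals where each setup's QPS plateaus.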

&lt;h3&gt;
  
  
  Measuring accuracy in Pinecone
&lt;/h3&gt;

&lt;p&gt;There is no public information about Pinecone's proprietary ANN index. Likewise, Pinecone doesn't report query accuracy, nor does it support exact nearest neighbor search (KNN). So to measure Pinecone's accuracy, we had to compare its results with pgvector's exact search (KNN without indexes) for the same queries. This seems to be the only way to measure Pinecone's index accuracy.&lt;/p&gt;
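The comparison boils down to this small computation: for each query, count how many of the exact top-k neighbor IDs the ANN search recovered, then average over all queries (a minimal sketch, not the benchmark's actual code):

```python
def accuracy_at_k(ann_results, exact_results, k: int = 10) -> float:
    """Average overlap between ANN top-k IDs and exact-search top-k IDs."""
    hits = sum(
        len(set(ann[:k]) & set(exact[:k]))
        for ann, exact in zip(ann_results, exact_results)
    )
    return hits / (k * len(exact_results))

# Toy example: the ANN search recovers 2 of the 3 true neighbors for one query
score = accuracy_at_k([[1, 2, 5]], [[1, 2, 3]], k=3)
```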

&lt;h2&gt;
  
  
  Benchmarking Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pinecone with s1 pod type
&lt;/h3&gt;

&lt;p&gt;As the index can fit in a single s1.x1 pod ($80/month), we created five additional replicas. Our Pinecone setup consisted of &lt;strong&gt;six s1 pods&lt;/strong&gt; (totaling $480/month). We measured Pinecone's accuracy for the dbpedia dataset using the s1 pod type, achieving a score of 0.98 at the 10 nearest neighbors (accuracy@10).&lt;/p&gt;

&lt;p&gt;To match the measured 0.98 accuracy@10 of Pinecone s1 pods, we set &lt;code&gt;ef_search=32&lt;/code&gt; for pgvector (HNSW) queries and observed the following results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1amjpu03rxnpinv7qze7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1amjpu03rxnpinv7qze7.png" alt="Pinecone s1.x1 vs pgvector HNSW" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pgvector HNSW index can manage 1185% more queries per second while being $70 cheaper per month.&lt;/p&gt;

&lt;p&gt;Interestingly, before HNSW support was introduced we'd often heard that pgvector's IVFFlat index was too slow. However, even the IVFFlat index on the same compute outperforms the Pinecone s1 pod, managing 143% more queries per second:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4xxdvniaelul1rrz4gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4xxdvniaelul1rrz4gw.png" alt="Pinecone s1.x1 vs pgvector IVFFlat" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  p1 pod type
&lt;/h3&gt;

&lt;p&gt;With Pinecone p1 pods, the dbpedia dataset fits into the index on a single p1.x2 pod ($160/month), so adding two more replicas kept us within budget. Our second experiment therefore used a Pinecone setup of &lt;strong&gt;three p1.x2 pods&lt;/strong&gt; (totaling $480/month). The measured accuracy@10 for the p1.x2 pod and the dbpedia dataset was 0.99.&lt;/p&gt;

&lt;p&gt;To match the 0.99 accuracy of Pinecone's p1.x2, we set &lt;code&gt;ef_search=40&lt;/code&gt; for pgvector (HNSW) queries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfzhz8a7eugw6vxs5k0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfzhz8a7eugw6vxs5k0u.png" alt="Pinecone p1.x2 vs pgvector HNSW" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;pgvector again demonstrated much better performance, with over 4x the QPS of the Pinecone setup, while still being $70 cheaper per month. Since Pinecone scales linearly by adding replicas, you can estimate that you would need 12-13 p1.x2 pods to match pgvector's performance. That equates to approximately $2,000 per month, versus ~$410 per month for a 2XL on Supabase.&lt;/p&gt;
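That estimate can be recomputed back-of-the-envelope from the figures quoted above (the ~4x QPS gap, Pinecone's linear replica scaling, and $160/month per p1.x2 pod):

```python
import math

# Figures taken from the text above; this is a rough estimate, not a price quote.
pods_in_benchmark = 3        # the three p1.x2 pods tested
qps_gap = 4.0                # pgvector QPS / Pinecone-setup QPS
p1_x2_monthly_cost = 160     # $ per p1.x2 pod per month

pods_needed = math.ceil(pods_in_benchmark * qps_gap)   # pods to match pgvector QPS
pinecone_monthly = pods_needed * p1_x2_monthly_cost    # total Pinecone cost
pgvector_monthly = 410                                 # Supabase 2XL
```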

&lt;h3&gt;
  
  
  p2 pod type
&lt;/h3&gt;

&lt;p&gt;This is Pinecone's fastest pod type, but the increased QPS comes with an accuracy trade-off. We measured &lt;code&gt;accuracy@10=0.94&lt;/code&gt; for the p2 pods and the dbpedia dataset. The index fits on a single p2.x2 pod ($240/month), so we could add one replica. Thus, Pinecone's setup for the third experiment consisted of &lt;strong&gt;two p2.x2 pods&lt;/strong&gt; (totaling $480/month).&lt;/p&gt;

&lt;p&gt;To match Pinecone's 0.94 accuracy, we set &lt;code&gt;ef_search=10&lt;/code&gt; for pgvector (HNSW) queries. In this test, pgvector's accuracy was actually 1% better, at 0.95 accuracy@10, and it was still significantly faster despite the higher accuracy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frca7h929r80tm62j2fp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frca7h929r80tm62j2fp0.png" alt="Pinecone p2.x2 vs pgvector HNSW" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's something important to highlight: pgvector is faster than Pinecone's fastest pod type, even with an accuracy@10=0.99 compared to Pinecone's 0.94. Pinecone's most expensive option sacrifices 5% accuracy just to match pgvector's speed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional thoughts on Pinecone vs. pgvector
&lt;/h2&gt;

&lt;p&gt;It's only fair to note that Pinecone may be cheaper than pgvector: you could use a single p1.x2 pod without replicas, costing about $160 per month, and still achieve approximately 60 QPS with &lt;code&gt;accuracy@10=0.99&lt;/code&gt;. To stay near that price with pgvector on Supabase, you would use a Large ($110) or XL ($210) compute add-on, where the index might not fit in RAM, falling back to KNN search without any indexes. On the other hand, that setup lets you store more vectors, similar to the s1 pod on Pinecone.&lt;/p&gt;

&lt;p&gt;Real user stories indicate this might not be problematic. For instance, Quivr expanded to 1 million vectors without using any indexes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff22kuhcupqc6bwowr6r0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff22kuhcupqc6bwowr6r0.jpeg" alt="A tweet from Stan Girard saying Is PGVector really that bad" width="800" height="1020"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  pgvector's hidden cost-saving benefits
&lt;/h2&gt;

&lt;p&gt;There are also a couple of benefits from a developer experience perspective that we often take for granted when using Postgres:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Postgres offers numerous features applicable to your vectors: database backups, row-level security, client libraries support and ORMs for 18 languages, complete ACID compliance, bulk updates and deletes (metadata updates in seconds).&lt;/li&gt;
&lt;li&gt;Having all your data in a sole Postgres instance (or a cluster) reduces roundtrips in production and allows running the entire developer setup locally.&lt;/li&gt;
&lt;li&gt;Implementing additional databases can increase operational complexity and the learning curve.&lt;/li&gt;
&lt;li&gt;Postgres is battle-tested and robust, whereas most specialized vector databases haven't had time to demonstrate their reliability.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Start using HNSW
&lt;/h2&gt;

&lt;p&gt;All &lt;a href="https://database.new/"&gt;new Supabase databases&lt;/a&gt; automatically ship with pgvector v0.5.0 which includes the new HNSW indexes. Try it out today and let us know what you think!&lt;/p&gt;

&lt;h2&gt;
  
  
  More pgvector and AI resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/increase-performance-pgvector-hnsw"&gt;pgvector v0.5.0: Faster semantic search with HNSW indexes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/docs/guides/database/extensions/pgvector"&gt;Docs pgvector: Embeddings and vector similarity&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/docs/guides/ai/choosing-compute-addon"&gt;Choosing Compute Add-on for AI workloads&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/fewer-dimensions-are-better-pgvector"&gt;pgvector: Fewer dimensions are better&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/hugging-face-supabase"&gt;Hugging Face is now supported in Supabase&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://supabase.com/blog/chatgpt-plugins-support-postgres"&gt;ChatGPT plugins now support Postgres &amp;amp; Supabase&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="http://supabase.com?utm_source=devto&amp;amp;utm_medium=referral&amp;amp;utm%5C_term=devtocta" class="ltag_cta ltag_cta--branded"&gt;🚀 Learn more about Supabase&lt;/a&gt;
&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Test Design using the Interface-Model-State Method</title>
      <dc:creator>egor romanov</dc:creator>
      <pubDate>Sun, 04 Jun 2023 16:43:57 +0000</pubDate>
      <link>https://dev.to/egor_romanov/test-design-using-the-interface-model-state-method-15ib</link>
      <guid>https://dev.to/egor_romanov/test-design-using-the-interface-model-state-method-15ib</guid>
      <description>&lt;p&gt;Introducing yet another method for developing functional test cases. Explore what happens when you base your approach on the architectural diagrams of the system being tested.&lt;/p&gt;

&lt;h2&gt;
  
  
  By reading this blog post, you will
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Master the &lt;strong&gt;Technique for Simplifying Complex Structures&lt;/strong&gt;, which involves deconstructing systems into components, making it easier to prepare functional test plans.&lt;/li&gt;
&lt;li&gt;Enhance your &lt;strong&gt;Analytical Thinking&lt;/strong&gt; through a step-by-step problem analysis framework.&lt;/li&gt;
&lt;li&gt;Gain a solid understanding of &lt;strong&gt;Test Design Strategies&lt;/strong&gt; by considering both positive and negative scenarios in functional test case design.&lt;/li&gt;
&lt;li&gt;Add a systematic and comprehensive approach to test design to your skillset, essential for anyone interested in test design and software testing.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I work in automated testing and have gone through the process of joining mature projects many times. Unfortunately, onboarding usually takes longer than desired, because a tester must have a good understanding of the business requirements, logic, and technical architecture of the systems being tested.&lt;/p&gt;

&lt;p&gt;During one of my interviews, I was asked a rather common question in the field: "How do you write test cases, or in other words, what is your method for developing tests that cover the functionality of the product?" Surprisingly, the question stumped me. I had read about the traditional approaches and an interesting concept from the book "How Google Tests Software," but none of them were fully applicable in my case.&lt;/p&gt;

&lt;p&gt;That's when I decided to formulate my own methodology and share it with others. I hope you'll find this article helpful, even if you're already using something similar.&lt;/p&gt;

&lt;p&gt;Meet my lovely ginger cat to brighten your mood 😺&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lXxMKRwT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72o6bjsupv4wm03aoigk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lXxMKRwT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72o6bjsupv4wm03aoigk.png" alt="Eve the cat" width="800" height="776"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Imagine you've just joined a project. Everyone is smiling, welcoming the newcomer, and always ready to help. As a first step, you open the documentation and diagrams to understand how everything works. And you're faced with something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0IFmCfiw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lf6f72aiwtubx1ikx0gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0IFmCfiw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lf6f72aiwtubx1ikx0gt.png" alt="System overview" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gathering your strength, you start deciphering the "magic" hidden behind all these technicalities. Often, the system's API is not documented at all, and even when it is, it's hard to find in the architectural diagrams. The same applies to the business entities your project is dealing with. It's nearly impossible to find them within the diagrams. Lastly, we all know from experience that components can be in different states (simply available or not), as can the models: the mere presence of a status field automatically adds several states to an object. Listing all these states in the diagrams is something I've never seen (call it bad luck).&lt;/p&gt;

&lt;p&gt;Now, let me tell you what I do in such situations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Essence
&lt;/h2&gt;

&lt;p&gt;I start with the idea that any system being tested can be viewed as a set of components (these could be microservices, packages/modules, classes, etc.) and models that are either stored within it or passed through it. With this premise:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The components operate with a set of models, such as a user or order.&lt;/li&gt;
&lt;li&gt;Components and models can have states. For example, a user can be logged in, deleted, and so on. Some components may be available, unavailable, or "slow".&lt;/li&gt;
&lt;li&gt;Components have some interface (GUI, REST API, GRPC, event subscriptions, CLI, a set of public methods, etc.), which consists of methods (or event subscriptions), their parameters, and the results of their calls – i.e., returned responses, additional calls, dispatched events, or changes to the state of models or the component itself.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thus, if you create a system diagram consisting of components, marking their models, interfaces, and states, you can, first and foremost, visualize how the system works and what can affect it. Secondly, you can use combinatorics to develop a functional test plan for that system.&lt;/p&gt;
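The combinatorial step can be sketched in a few lines: describe the scheme as data, then take the cross-product of states to enumerate raw test conditions. The concrete components and states here are illustrative, borrowed from the order-processing example later in this post:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Component:
    name: str
    states: list

# Illustrative scheme: two components plus two models with their states
components = [
    Component("Orders DB", ["available", "unavailable", "slow"]),
    Component("Payment provider", ["available", "unavailable", "slow"]),
]
model_states = {
    "User": ["non-existent", "logged in", "not logged in", "deleted"],
    "Order": ["non-existent", "created", "deleted", "paid", "closed"],
}

# Cross-product of all states = raw functional test conditions
conditions = list(product(*(c.states for c in components), *model_states.values()))
# 3 * 3 * 4 * 5 = 180 combinations before pruning the impossible ones
```

In practice you would prune impossible combinations and apply pairwise or risk-based reduction, but the enumeration gives you a complete starting point.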

&lt;h2&gt;
  
  
  Let's examine an example
&lt;/h2&gt;

&lt;h3&gt;
  
  
  I. Creating a scheme
&lt;/h3&gt;

&lt;p&gt;Let's imagine that we need to test a component of our system responsible for handling orders: the "Order component".&lt;/p&gt;

&lt;h4&gt;
  
  
  a. First, let's determine the components that interact with our "Order component"
&lt;/h4&gt;

&lt;p&gt;It has a database with orders, interacts with an external "Payment provider" service for processing payments, and there is also an event bus where our component sends events about API calls.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All our components may be in one of the following states: available, unavailable, slow (requests are processed slowly). This includes the event bus.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iVHWa108--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8sg31gr6vkvzt22aopw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iVHWa108--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8sg31gr6vkvzt22aopw6.png" alt="System diagram with order service and payments provider" width="800" height="758"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  b. Now let's picture the models this part of our system operates with
&lt;/h4&gt;

&lt;p&gt;Undoubtedly, it is the &lt;code&gt;Order&lt;/code&gt; entity. If we think a little, only a user can create an order in our application, meaning that the order stores information about the user, and the API of our service can also be used by the user. So the second model is &lt;code&gt;User&lt;/code&gt;. Moving forward, orders also contain information about their contents, i.e., the products, so the third model is &lt;code&gt;Product&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Let's figure out in which states our models can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;User can be in one of the following states: non-existent, logged in, not logged in, deleted. Also, if they exist, they may be in one of two states: bank card information provided or not.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Product can also be in one of the states: non-existent, has a quantity available (0 or more), deleted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Order can be in the states: non-existent, created, deleted, paid, and closed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zST4va9w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg2bzwo32rcud6e8a3kr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zST4va9w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bg2bzwo32rcud6e8a3kr.png" alt="Models and states" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  c. Let's determine what interfaces our system components have
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;I. The "Order" service we are testing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can create an order;&lt;/li&gt;
&lt;li&gt;Cancel or delete an order;&lt;/li&gt;
&lt;li&gt;Pay for an order;&lt;/li&gt;
&lt;li&gt;Close it.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For simplicity, let's keep only the "payOrder" method for analysis. It accepts the order ID as an argument. However, from the business logic perspective, only a user can call this method. Additionally, through its ID, the order carries information about the products it contains and the user who owns it. So the complete list of business arguments for the &lt;code&gt;payOrder&lt;/code&gt; method is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;User who called the method;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Order;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Products in the order;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let's figure out what happens when the &lt;code&gt;payOrder&lt;/code&gt; method is called:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Validation of arguments;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Calling an external service to make a payment and awaiting a response;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Updating the order status if it has been successfully paid;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sending an event to the data bus about whether the payment was successful or not;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Returning a response: the payment was successful or not.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An important note: as testers, we don't know the exact order of these actions. Keep this in mind and take it into account.&lt;/p&gt;
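The steps above can be sketched as code. This is a hypothetical stand-in for the real service (all class and method names are invented), with tiny in-memory fakes so the happy path can be exercised:

```python
class PaymentError(Exception):
    pass

class OrderService:
    """Hypothetical sketch of the Order component's payOrder flow."""
    def __init__(self, db, payments, bus):
        self.db, self.payments, self.bus = db, payments, bus

    def pay_order(self, user_id, order_id):
        order = self.db.get(order_id)
        # 1. Validate: order exists, is in Created state, belongs to the caller
        if order is None or order["status"] != "created" or order["user_id"] != user_id:
            self.bus.send("orderPayFailed", order_id)
            return False
        # 2. Call the external payment provider and await its response
        try:
            self.payments.pay_by_card(order_id, order["amount"], user_id)
        except PaymentError:
            self.bus.send("orderPayFailed", order_id)
            return False
        # 3-5. Update the order status, publish the event, return the result
        self.db.update(order_id, status="paid")
        self.bus.send("orderPaid", order_id)
        return True

# In-memory fakes for a quick check of the happy path:
class FakeDB:
    def __init__(self, orders): self.orders = orders
    def get(self, oid): return self.orders.get(oid)
    def update(self, oid, **fields): self.orders[oid].update(fields)

class FakeBus:
    def __init__(self): self.events = []
    def send(self, name, payload): self.events.append(name)

class HappyPayments:
    def pay_by_card(self, order_id, amount, user_id): pass

db = FakeDB({"o1": {"status": "created", "user_id": "u1", "amount": 100}})
bus = FakeBus()
ok = OrderService(db, HappyPayments(), bus).pay_order("u1", "o1")
```

Note how each validation branch in the sketch corresponds to a negative test case, and each collaborator (DB, payments, bus) to a component whose states must be varied.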

&lt;p&gt;Models that the service works with, based on the arguments of the tested method, are &lt;code&gt;User&lt;/code&gt;, &lt;code&gt;Order&lt;/code&gt;, &lt;code&gt;Product&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;II. The "Orders DB" database:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It supports CRUDL operations for orders. Orders contain information about products and the users who own them. Responses to calls are success, failure, or no response at all. So the models are the same: &lt;code&gt;User&lt;/code&gt;, &lt;code&gt;Order&lt;/code&gt;, &lt;code&gt;Product&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;III. External integration - "Payment provider":&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Used by the Orders component only when calling the &lt;code&gt;payOrder&lt;/code&gt; method. It exposes one API method that matters to us, &lt;code&gt;payByCard&lt;/code&gt;, which accepts the order number, its amount, and the user's bank card information. Responses to calls are likewise success, failure, or no response at all. So, per our business understanding, the models used are &lt;code&gt;User&lt;/code&gt; and &lt;code&gt;Order&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;IV. Also, we have the "Event Bus":&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Order component sends events about API calls to it. For &lt;code&gt;payOrder&lt;/code&gt; calls, there will be two events depending on the status: &lt;code&gt;orderPaid&lt;/code&gt; and &lt;code&gt;orderPayFailed&lt;/code&gt;. These events contain information about the order and the reason for failure for &lt;code&gt;orderPayFailed&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EVul20V2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zanl7axwtstgio7gkoij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EVul20V2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zanl7axwtstgio7gkoij.png" alt="Detailed system diagram with states, APIs and models" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Phew, it seems to be ready. Now we have a clear scheme in front of us that shows how our system works, its components and their connections, the business entities they manipulate, and their interfaces. The only thing left is to add a scheme for what happens when the tested component methods are called.&lt;/p&gt;

&lt;h3&gt;
  
  
  II. Test Design
&lt;/h3&gt;

&lt;p&gt;We have already looked at what actions should take place when we call the tested method. Let us recall and analyze the order of these actions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MvTYmMxi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5w8c2oteoxq472nisme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MvTYmMxi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5w8c2oteoxq472nisme.png" alt="System diagram with requests between components" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  a. Argument validation
&lt;/h4&gt;

&lt;p&gt;Let's go step by step. We pass metadata about the user who called the method, and we pass the &lt;code&gt;order ID&lt;/code&gt;; the order, in turn, also references the user who created it. Thus:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Our method should return an error and attempt to send an &lt;code&gt;orderPayFailed&lt;/code&gt; event if the user who called the method is invalid, meaning, for example, deleted;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, the order must be validated and information about it collected. The component needs to access the database. If the database is unavailable, we must return an error and send an event to the bus that the payment failed (and, thinking about it for a second, we should also trigger alerts and metrics here because this situation is abnormal, so we've noticed another point worth checking);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once we've collected the order data, we compare it. First, the order must exist. Second, it must be in exactly one state, Created; otherwise we send the failure event and return an error. Finally, the user who called the method must match the one who created the order; otherwise, again, an error and a failure event.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It seems ready. Let's look over everything one more time, and… we notice that the order contains products, and products can also have different states. What if we are trying to pay for an order containing products that have already been deleted? What if the quantity of a product in the order is greater than what we have left in stock? Perhaps it is worth adding another component to our scheme, with a step similar to the database check, except that here we send a request to reduce the quantity of remaining products. If this call is unsuccessful, we must go through the negative scenario. This way, we discovered that we had missed an entire interaction, but by describing the scheme carefully, we were able to notice it at an early stage: while composing test cases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5-nAV1eO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct7mjzkjj466ji2884r8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5-nAV1eO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct7mjzkjj466ji2884r8.png" alt="System diagram with products service" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's better now. If all checks have passed, we move on. Next, we must call an external service to pay for the order using a card. Stop. We have two more user states: card information present or not. This also needs to be validated, and if there is no card data, go through the negative case again.&lt;/p&gt;

&lt;p&gt;That's it for now. With validation finished, let's move on to the next step. Additionally, for negative scenarios, we must check that further steps have not been executed, subsequent calls have not been made, and nothing has changed in the database.&lt;/p&gt;

&lt;h4&gt;
  
  
  b. Calling an external service to make a payment
&lt;/h4&gt;

&lt;p&gt;Let's start with the states the service can be in. It can be available or unavailable (in practice it can also respond slowly; that option should be considered as well, but we will skip it for brevity).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If the service is unavailable, the negative branch is executed, returning an error and sending an &lt;code&gt;orderPayFailed&lt;/code&gt; event. In real life, it would also be worth checking alerts and metrics here and adding the components responsible for them to the diagram.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the service is available, we need to check the two possible responses: success and failure. If we get an error in response, we follow the negative branch; on success, we move on. It is worth thinking about when we might get a negative response: an invalid card, payment amount issues, or a defect on the payment provider's side. All of these cases need to be checked.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The third possible option is the absence of any response. We need to check how our system and the tested Order component react to this situation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
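&lt;p&gt;The state combinatorics above can be sketched as a small test matrix. This is a rough illustration, not real project code: the state names and expectation fields are assumptions.&lt;/p&gt;

```typescript
// Hypothetical test matrix for the payment step: service state vs. expected outcome.
// The states mirror the bullet list above; all names here are illustrative.
type PayServiceState =
  | "available-success"
  | "available-failure"
  | "unavailable"
  | "no-response";

interface PaymentCase {
  state: PayServiceState;
  expectError: boolean;       // should the Order component return an error?
  expectFailedEvent: boolean; // should an orderPayFailed event be emitted?
}

const paymentCases: PaymentCase[] = [
  { state: "available-success", expectError: false, expectFailedEvent: false },
  { state: "available-failure", expectError: true,  expectFailedEvent: true  },
  { state: "unavailable",       expectError: true,  expectFailedEvent: true  },
  { state: "no-response",       expectError: true,  expectFailedEvent: true  },
];

// Every negative case must both return an error and emit the failure event.
const negatives = paymentCases.filter((c) => c.state !== "available-success");
```

&lt;p&gt;A table like this also makes it easy to spot a state you forgot to cover: each new state of a dependency adds a row, not a new ad-hoc test.&lt;/p&gt;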

&lt;p&gt;If this part succeeds, the following actions should be performed in parallel; their order does not matter much here, with one nuance: our component must complete the remaining steps successfully, because the user has already paid for the order.&lt;/p&gt;

&lt;h4&gt;
  
  
  c. Updating the order status
&lt;/h4&gt;

&lt;p&gt;The tested component must access the database and update the order status to Paid. Here again, it is necessary to understand how the Order component behaves depending on the state of the database: what happens if it is available, and what if it is not? Once again, note that in all the negative cases considered so far, we should verify that the tested application did not attempt to update the order.&lt;/p&gt;

&lt;h4&gt;
  
  
  d. Sending an event to the data bus about whether the payment was successful or not
&lt;/h4&gt;

&lt;p&gt;We have already checked the sending of payment error events in the negative cases reviewed earlier. I will only note here that we should check exactly what is sent in the &lt;code&gt;orderPayFailed&lt;/code&gt; event: is the error in it correct? If everything went well, we need to send an &lt;code&gt;orderPaid&lt;/code&gt; event about the successful payment and check its body. All of these cases, both positive and negative, should account for both the good scenario where the bus is available and the bad one where it is unavailable or slow. Do we have a buffer for such cases, and are we sure we will not lose events?&lt;/p&gt;
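&lt;p&gt;The event-body check mentioned above can be expressed as plain assertions on the payload. A minimal sketch, assuming hypothetical field names (&lt;code&gt;orderId&lt;/code&gt;, &lt;code&gt;errorCode&lt;/code&gt;, &lt;code&gt;errorMessage&lt;/code&gt;), not the real schema:&lt;/p&gt;

```typescript
// Illustrative validation of an orderPayFailed event body; field names are assumed.
interface OrderPayFailedEvent {
  orderId: string;
  errorCode: string;    // must match the actual failure cause, e.g. "CARD_DECLINED"
  errorMessage: string; // human-readable explanation
}

function isValidFailedEvent(
  event: OrderPayFailedEvent,
  expectedOrderId: string,
  expectedCode: string
): boolean {
  const checks = [
    event.orderId === expectedOrderId, // references the right order
    event.errorCode === expectedCode,  // correct error for this scenario
    event.errorMessage.length > 0,     // message is not empty
  ];
  return checks.every(Boolean);
}
```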

&lt;h4&gt;
  
  
  e. Returning a response: whether the payment was successful or not
&lt;/h4&gt;

&lt;p&gt;A response should be returned in any case. Again, it is worth checking the error texts and codes in each scenario.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We've obtained a set of scenarios that can arise when working with the tested component, and we have described the main business segments of our system: models, component interfaces, the states those components and models can be in, and the interactions that occur when methods from the tested interface are used. Notice that we applied combinatorics over models, interfaces, components, and their states to form the set of test cases.&lt;/p&gt;

&lt;p&gt;Thanks to this approach, we got a fairly extensive test plan. And because we used diagrams throughout and applied them while studying the functionality, we noticed the points we had initially missed.&lt;/p&gt;

&lt;h2&gt;
  
  
  When I use it
&lt;/h2&gt;

&lt;p&gt;This approach has been particularly helpful for me when I had to test critical parts of functionality where I wanted to be highly confident.&lt;/p&gt;

&lt;p&gt;In practice, though, I reach for it constantly: when I need to test a feature thoroughly and when I need to figure out something new. Even when I need to check something quickly, I still use it, but I draw very rough diagrams on paper or simply keep them in my head.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit, Integration, E2E?
&lt;/h3&gt;

&lt;p&gt;The example we considered focused mostly on component and integration testing.&lt;/p&gt;

&lt;p&gt;But we could easily turn these cases into, for example, end-to-end tests. For the diagram drawn above, we would need to consider who is subscribed to the &lt;code&gt;orderPaid&lt;/code&gt; and &lt;code&gt;orderPayFailed&lt;/code&gt; events and investigate what happens to those subscribers upon receiving them. We'd add, for example, releasing a reservation or, conversely, writing off products in the product component, sending notifications in the notification component via external services, and so on.&lt;/p&gt;

&lt;p&gt;To turn the set into unit tests, we simply need to mock and stub everything around (although some say there are no stubs in unit tests) 👍&lt;/p&gt;
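&lt;p&gt;As a rough sketch of the unit-level variant: the external payment service is replaced by a stub, and the same positive and negative branches are exercised in isolation. All names here (&lt;code&gt;payOrder&lt;/code&gt;, &lt;code&gt;PaymentClient&lt;/code&gt;) are illustrative assumptions, not a real API.&lt;/p&gt;

```typescript
// Unit-style sketch: the external payment provider is stubbed out.
// payOrder and PaymentClient are hypothetical names, not the article's real code.
interface PaymentClient {
  charge(orderId: string): "success" | "failure";
}

function payOrder(orderId: string, client: PaymentClient): { ok: boolean; event: string } {
  const result = client.charge(orderId);
  if (result === "success") {
    return { ok: true, event: "orderPaid" };
  }
  return { ok: false, event: "orderPayFailed" };
}

// Stubs standing in for the real payment provider:
const alwaysSucceeds: PaymentClient = { charge: () => "success" };
const alwaysFails: PaymentClient = { charge: () => "failure" };
```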

&lt;p&gt;In addition, a UML diagram or a set of screens from an app can serve as the scheme. How to draw it is a matter of taste, convenience, and the level of the component under test.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thoughts
&lt;/h2&gt;

&lt;p&gt;Did I confuse you too much? :) Even so, this methodology only seems complicated at first glance. Read it again with a piece of your own system at hand and break it down in exactly the same way. I'm confident everything will go smoothly; in this case, practice is easier than theory 😉&lt;/p&gt;

&lt;p&gt;I hope my article will be useful to you!&lt;/p&gt;

&lt;p&gt;Share how you approach test design. I would also be happy to hear your opinion on how my thought process works. Drop me a message and let's discuss this topic.&lt;/p&gt;

&lt;p&gt;Find me on &lt;a href="https://twitter.com/egor_test"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/egor-romanov"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And visit &lt;a href="https://intest.dev"&gt;my website&lt;/a&gt; to find my public activities or read about my projects.&lt;/p&gt;

&lt;p&gt;Good luck to everyone!&lt;/p&gt;

</description>
      <category>testing</category>
      <category>tutorial</category>
      <category>learning</category>
      <category>api</category>
    </item>
    <item>
      <title>Building a Startup from Scratch: My Mistakes as CTO</title>
      <dc:creator>egor romanov</dc:creator>
      <pubDate>Wed, 01 Feb 2023 13:16:04 +0000</pubDate>
      <link>https://dev.to/egor_romanov/building-a-startup-from-scratch-my-mistakes-as-cto-1m5b</link>
      <guid>https://dev.to/egor_romanov/building-a-startup-from-scratch-my-mistakes-as-cto-1m5b</guid>
      <description>&lt;p&gt;When I was first approached to help build the technical side of a new startup, I had yet to learn what I was getting into. I was invited by a friend to audit the solution that the previous technical lead and developer had started. Still, due to unforeseen circumstances, both of them decided to leave the project. I was left with a barely started product and no team to continue the work.&lt;/p&gt;

&lt;p&gt;The startup was developing an app to help users find the best deals and businesses to make the most of their time and money. The app was supposed to connect users with companies that had excess inventory or capacity during off-peak hours, allowing them to take advantage of discounts and deals. The requirement was to build a mobile app for iOS and Android, as well as a web admin portal for business owners to manage their offerings and communicate with customers. Additionally, all purchases had to go through our app.&lt;/p&gt;

&lt;p&gt;With no team in place and a tight deadline, I knew I had to act fast. I started by assembling a team of engineers to build the backend, admin web portal, and mobile apps. We had a clear vision of what we wanted to achieve and a solid plan in place, but I knew both would change multiple times. Finding the right engineers took more time than I had expected, and adjusting our strategy accordingly was crucial; still, I was able to build a great team that could execute our vision and adapt to changing circumstances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bPZnG7mw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oitqvrcvepuwwd0nnl75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bPZnG7mw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oitqvrcvepuwwd0nnl75.png" alt="App screens" width="880" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Scalable Backend with Microservices: Our Experience
&lt;/h2&gt;

&lt;p&gt;When I started building the backend for our startup, I knew scalability and adaptability would be key. After an extensive search, I found a highly skilled backend developer with experience in Node.js. Together, we decided to build our backend on a microservices architecture, a decision driven by the dynamic nature of our startup’s requirements and the extra time we had before finding a mobile and web developer.&lt;/p&gt;

&lt;p&gt;I had some experience with infrastructure, so I took on setting up the cloud, the Kubernetes cluster, monitoring and logging, and infrastructure as code. We used GitLab for version control and a CI/CD pipeline to automate the build, test, and deploy process. We chose JSON-RPC as the communication protocol and Node.js for the backend. Our backend developer chose MongoDB as the database, while I would have preferred Postgres.&lt;/p&gt;
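&lt;p&gt;For context, a JSON-RPC 2.0 exchange over that protocol might look like the sketch below. The envelope fields (&lt;code&gt;jsonrpc&lt;/code&gt;, &lt;code&gt;method&lt;/code&gt;, &lt;code&gt;params&lt;/code&gt;, &lt;code&gt;id&lt;/code&gt;) come from the JSON-RPC 2.0 specification; the method name and parameters are made up for illustration.&lt;/p&gt;

```typescript
// A JSON-RPC 2.0 request/response pair; "orders.create" and its params are illustrative.
const request = {
  jsonrpc: "2.0",
  method: "orders.create",
  params: { userId: "u-42", items: [{ productId: "p-1", quantity: 2 }] },
  id: 1,
};

const response = {
  jsonrpc: "2.0",
  result: { orderId: "o-1001", status: "Created" },
  id: 1, // must echo the request id so the client can match replies
};

// The serialized request is what actually travels over the websocket.
const payload = JSON.stringify(request);
```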

&lt;p&gt;We ended up with several microservices, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Products: manages the products and deals offered by partners, their retail locations, and promotions. It handles creating, updating, and moderating them, as well as launching and stopping campaigns.&lt;/li&gt;
&lt;li&gt;Business Users: manages the companies, their employees, and the corresponding data in the Auth service on the admin panel side.&lt;/li&gt;
&lt;li&gt;Orders: responsible for the cart and the order lifecycle, as well as the integration with the payment system.&lt;/li&gt;
&lt;li&gt;Gate: sits at the two entry points from clients to the backend (mobile devices and the admin panel). It maintains a websocket connection between the client and the backend, directing requests either to the authentication service or to a facade.&lt;/li&gt;
&lt;li&gt;Admin Facade and User Facade: facade microservices that distribute client requests to the other services. They encapsulate the internal structure of the system and expose only the methods available to the client.&lt;/li&gt;
&lt;li&gt;Auth: responsible for user authentication and authorization.&lt;/li&gt;
&lt;li&gt;File: manages static resources (such as product photos or legal documents with partners) and integrates with Yandex for data storage.&lt;/li&gt;
&lt;li&gt;App Users: manages mobile app users. Among other things, it stores meta information about them: last seen, friends, etc.&lt;/li&gt;
&lt;li&gt;Settings: manages the settings of the app.&lt;/li&gt;
&lt;li&gt;Marketing: manages marketing campaigns, promotions, and recommendations.&lt;/li&gt;
&lt;li&gt;Email, Push, and SMS notifications: services responsible for integrating with the respective vendors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The platform had several external integrations: CloudPayments for payments; push, SMS, and email providers for notifications; and Yandex Cloud Object Storage for static files (e.g. images).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---goPEb5E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/so5pjidzluq6d8gya0ym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---goPEb5E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/so5pjidzluq6d8gya0ym.png" alt="Our architecture" width="880" height="547"&gt;&lt;/a&gt;&lt;/p&gt;
Our architecture



&lt;p&gt;It took us 2–3 months to find a mobile and web developer, but by that time, we had a solid backend infrastructure in place. We were able to change concepts and requirements multiple times during the development process, and the microservices architecture made it easy to adapt our backend accordingly.&lt;/p&gt;

&lt;p&gt;Our mobile developer was terrific and did a great job reworking the mobile apps several times to match each new vision from our CEO and design team. Communication between the client apps and the backend happens over a websocket using the JSON-RPC protocol. We used Vue.js on the web frontend and React Native on mobile, which helped with consistency and code sharing across the team.&lt;/p&gt;

&lt;p&gt;Overall, using JavaScript everywhere was a great decision: it helped engineers read each other’s code and make the changes needed whenever service communication evolved, especially between the backend and the mobile and web apps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kJ3ESGQs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyelaxqxd1mbo9bdhgzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kJ3ESGQs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vyelaxqxd1mbo9bdhgzu.png" alt="Landing on an iPhone screen" width="880" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  An Introduction to Supabase
&lt;/h2&gt;

&lt;p&gt;When we launched our startup, we published a blog post about the challenges we faced building the tech side of the company. We received a lot of feedback from our community; some of it was negative, but a lot of it was constructive. One suggestion that stood out to me was to use a service like Firebase to simplify our backend.&lt;/p&gt;

&lt;p&gt;At the time, I didn’t think using Firebase was a good idea; it felt like heavy vendor lock-in, and I was worried about losing control over our data and infrastructure.&lt;/p&gt;

&lt;p&gt;Spoiler: a few months later, our startup failed to gain traction, and we had to close it down. It was during this time that I came across Supabase while browsing through the latest Y Combinator batch. Supabase felt like the solution I should have found when I was starting my work on the startup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tV9NJa3A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/shlb3kmf760i6vwyqyoy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tV9NJa3A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/shlb3kmf760i6vwyqyoy.png" alt="meme about shut down startup" width="880" height="731"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s an open-source platform that aims to simplify the process of building a scalable and secure backend for web and mobile apps. Built on top of Postgres, it provides a set of tools and services for managing the database, authentication, realtime data sync, and storage objects while still giving you control over your data and infrastructure. Some of its key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic API generation: Supabase automatically generates REST and GraphQL APIs, plus realtime websocket notifications, for your Postgres database, allowing you to quickly and easily access your data from web and mobile apps.&lt;/li&gt;
&lt;li&gt;User authentication and authorization: Supabase provides built-in support for user authentication and authorization, making it easy to secure your app and protect sensitive data.&lt;/li&gt;
&lt;li&gt;Realtime: Supabase can keep your web and mobile apps in sync with the database, eliminating the need for manual data refresh.&lt;/li&gt;
&lt;li&gt;Storage: you can store large objects, such as images or documents, and request resized versions of images on the fly.&lt;/li&gt;
&lt;li&gt;Scalable and secure: because Supabase is built on top of Postgres, it can be scaled vertically and horizontally, and it offers security features such as encryption and Row Level Security (RLS).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a1-2klVG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78lzampnr4umeoi3lic4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a1-2klVG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/78lzampnr4umeoi3lic4.png" alt="https://supabase.com/docs/guides/getting-started/architecture" width="880" height="392"&gt;&lt;/a&gt;&lt;a href="https://supabase.com/docs/guides/getting-started/architecture"&gt;https://supabase.com/docs/guides/getting-started/architecture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Supabase is an excellent choice for startups and small teams who want to build a backend quickly and easily without having to worry about the complexities of setting up and maintaining the whole infra themselves. And even in a big tech company, when you launch a new service, you should consider Supabase or similar OSS projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  What If We Had Used Supabase
&lt;/h2&gt;

&lt;p&gt;In this section, we’re shifting gears and imagining how our startup would have been different if we had used Supabase from the start. Instead of spending a few months building microservices, we could have focused on what really mattered: our users and our product. I could have invested my time in searching for a mobile developer and, instead of infrastructure, focused on the backend. Supabase would have made setting up and managing a database a breeze, with built-in services that would have replaced most of our microservices. It would have saved us time, money, and headaches, and we would not have lost the ability to adapt to changing requirements, which was one of our most significant advantages. Unfortunately, we didn’t know about Supabase back then, but maybe you do, and it can change your startup’s story.&lt;/p&gt;

&lt;p&gt;Let’s look at what we could have replaced with different Supabase features.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://supabase.com/docs/guides/auth/overview"&gt;Auth&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;First, Supabase’s built-in authentication and user management service could have replaced our separate auth and user management microservices, giving us user registration, login, and management of user roles and permissions out of the box. Additionally, Supabase’s support for Row Level Security (RLS) would have allowed us to implement fine-grained access to our data. For example, users could only view their own orders, while business owners could edit their offerings, and administrators could access all data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9H6KvmAO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ly7veos0ctkfvik8yrjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9H6KvmAO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ly7veos0ctkfvik8yrjg.png" alt="Create an RLS policy in Supabase so that company owner can manage his employees" width="880" height="519"&gt;&lt;/a&gt;Create an RLS policy in Supabase so that company owner can manage his employees&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;a href="https://supabase.com/docs/guides/storage"&gt;Storage&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;Using Supabase’s built-in file storage would have simplified the process of handling file uploads, downloads, and management, eliminating the need for additional storage solutions. Instead of having a separate file microservice, Supabase’s built-in file storage would have allowed us to take advantage of its features, such as &lt;a href="https://supabase.com/docs/guides/storage/image-transformations"&gt;image resizing&lt;/a&gt; for product images, allowing us to create previews on the fly. Additionally, we could have used Supabase’s file storage to securely store and manage legal documents that needed to be signed by our business users, providing us with a centralized location to store and access all these important documents without the need for additional third-party services.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Request a small resized image of a product from Supabase Storage:&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;storage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mama_jane&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;download&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;pizza.jpeg&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;height&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;origin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;a href="https://supabase.com/docs/guides/api"&gt;Gateways And Facades&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;With Supabase, we could have said goodbye to our two gateway and facade microservices, which were responsible for communication between the mobile and web apps and the other microservices. Supabase’s automatically generated PostgREST and GraphQL APIs would have taken care of all that, allowing us to focus on other things.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight graphql"&gt;&lt;code&gt;&lt;span class="c"&gt;# Retrieve a feed with products from app using Supabase generated GraphQL API&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="n"&gt;retailersCollection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;filter&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;active&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;eq&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="n"&gt;edges&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="n"&gt;productsCollection&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="n"&gt;edges&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="n"&gt;node&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="n"&gt;imageUrl&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Products
&lt;/h3&gt;

&lt;p&gt;As for our products and settings microservices, they mainly functioned as data owners, handling CRUD operations for our products. With Supabase, we could have skipped these services altogether and instead used the power of Postgres directly. This would have allowed us to handle the more complex operations, like product updates that require transactions, directly in the database. And as for the internal hooks and cron jobs that were part of these services, Supabase’s support for pg_cron, triggers, &lt;a href="https://supabase.com/docs/guides/database/webhooks"&gt;webhooks&lt;/a&gt;, and &lt;a href="https://supabase.com/docs/guides/functions"&gt;serverless functions&lt;/a&gt; would have done the trick just as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notifications
&lt;/h3&gt;

&lt;p&gt;We could also have replaced our push, SMS, and email microservices with serverless functions and triggers on tables within Supabase. For example, we could have set up a trigger on the orders table to send a push notification to the user when their order is confirmed. The same goes for SMS and email: triggers could automatically send messages when certain events occur, such as a user account being created or a new product being added. This would have greatly reduced the complexity of our architecture and made it much easier to adapt to changing requirements. Imagine our marketing manager wants to run a campaign and send push notifications to users who have not placed an order in the last 30 days. With Supabase, this could have been achieved with a simple trigger on the orders table.&lt;/p&gt;
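&lt;p&gt;As a sketch of the decision such a function would make: a serverless function receiving a database webhook for the orders table could filter for the "order just confirmed" transition like this. The payload shape follows the general Supabase database-webhook format (&lt;code&gt;type&lt;/code&gt;, &lt;code&gt;table&lt;/code&gt;, &lt;code&gt;record&lt;/code&gt;, &lt;code&gt;old_record&lt;/code&gt;); the column and status names are assumptions.&lt;/p&gt;

```typescript
// Decide whether an orders-table change should trigger a push notification.
// Payload fields mirror a database webhook; the "Confirmed" status is assumed.
interface OrderWebhook {
  type: "INSERT" | "UPDATE" | "DELETE";
  table: string;
  record: { id: string; user_id: string; status: string };
  old_record: { status: string } | null;
}

function shouldSendPush(payload: OrderWebhook): boolean {
  const checks = [
    payload.table === "orders",
    payload.type === "UPDATE",
    payload.record.status === "Confirmed",
    payload.old_record !== null,
    payload.old_record?.status !== "Confirmed", // notify only on the transition
  ];
  return checks.every(Boolean);
}
```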

&lt;h3&gt;
  
  
  Marketing campaigns
&lt;/h3&gt;

&lt;p&gt;By the way, this example shows that we no longer need the marketing service either: we could have set up automated campaigns triggered by specific actions, as in the example above, or by introducing a new table called marketing_campaigns. Our marketing manager could then simply insert a new row into that table with parameters such as which users to notify, and a trigger on the table would invoke a serverless function to send out the push notifications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;--Trigger to send push notifications after the&lt;/span&gt;
&lt;span class="c1"&gt;--marketing specialist adds a marketing campaign to the database.&lt;/span&gt;
&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;or&lt;/span&gt; &lt;span class="k"&gt;replace&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;insert_marketing_campaign&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;returns&lt;/span&gt; &lt;span class="k"&gt;trigger&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="err"&gt;$$&lt;/span&gt;
&lt;span class="k"&gt;begin&lt;/span&gt;
    &lt;span class="k"&gt;insert&lt;/span&gt; &lt;span class="k"&gt;into&lt;/span&gt; &lt;span class="n"&gt;marketing_campaigns&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_group&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start_date&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;values&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_group&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;start_date&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;perform&lt;/span&gt; &lt;span class="n"&gt;send_push_events&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="err"&gt;$$&lt;/span&gt; &lt;span class="k"&gt;language&lt;/span&gt; &lt;span class="n"&gt;plpgsql&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;create&lt;/span&gt; &lt;span class="k"&gt;trigger&lt;/span&gt; &lt;span class="n"&gt;insert_and_send_push&lt;/span&gt;
&lt;span class="k"&gt;after&lt;/span&gt; &lt;span class="k"&gt;insert&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;marketing_campaigns&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;each&lt;/span&gt; &lt;span class="k"&gt;row&lt;/span&gt;
&lt;span class="k"&gt;execute&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;insert_marketing_campaign&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;a href="https://app.supabase.com/projects"&gt;Admin studio&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;In retrospect, I realize that building a custom admin portal for our business clients may not have been the best decision. Despite my initial reservations, my partners insisted on its development. However, as it turned out, our clients were not quite ready to navigate a new and unfamiliar interface. The Supabase dashboard would have made it easy for our sales team to manage our business customers’ offerings. Perhaps I should have convinced my partners to hold off on developing a separate, custom-built admin portal until we had more user traction and a better understanding of our clients’ needs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pQDGUrC0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snd6xyvge0ppqzssba6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pQDGUrC0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snd6xyvge0ppqzssba6q.png" alt="User management in the Supabase dashboard" width="880" height="500"&gt;&lt;/a&gt;User management in the Supabase dashboard&lt;/p&gt;

&lt;h3&gt;
  
  
  Orders
&lt;/h3&gt;

&lt;p&gt;That leaves us with only the orders service. We could technically have replaced it with serverless functions and triggers, but I would have preferred to keep it as is: payments are a very sensitive area, and I am more comfortable keeping that logic under direct control. Still, if we had used Stripe as our payment provider, we could have taken advantage of Supabase’s new &lt;a href="https://supabase.com/blog/postgres-foreign-data-wrappers-rust"&gt;Wrappers&lt;/a&gt; functionality, which uses Postgres foreign data wrappers to send queries to Stripe directly from within Postgres, and that could have sealed the deal. And with a bit of extra time and effort, we could even have created our own wrapper for our payment provider to integrate it with Supabase.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Listen for stripe events using Supabase Edge Functions &lt;/span&gt;
&lt;span class="c1"&gt;// (to keep track of invoice-paid events, for example)&lt;/span&gt;
&lt;span class="nx"&gt;serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;signature&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Stripe-Signature&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;receivedEvent&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;receivedEvent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;stripe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;webhooks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;constructEventAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;signature&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;Deno&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;STRIPE_WEBHOOK_SIGNING_SECRET&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;cryptoProvider&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;receivedEvent&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
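
&lt;p&gt;On the database side, the Wrappers approach could look roughly like the sketch below. Note that the handler and validator names, the server options, and the column list are assumptions based on my reading of the Supabase Wrappers documentation, and should be checked against the current docs before use.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- A sketch of querying Stripe from Postgres via Supabase Wrappers.
-- Handler/validator names and options are assumptions; check the docs.
create extension if not exists wrappers;

create foreign data wrapper stripe_wrapper
  handler stripe_fdw_handler
  validator stripe_fdw_validator;

create server stripe_server
  foreign data wrapper stripe_wrapper
  options (api_key 'sk_test_...');

create schema stripe;

create foreign table stripe.customers (
  id text,
  email text,
  created timestamp
)
server stripe_server
options (object 'customers');

-- Stripe customers can now be queried (and joined with local tables)
select id, email from stripe.customers;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;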



&lt;h3&gt;
  
  
  Other thoughts
&lt;/h3&gt;

&lt;p&gt;Our startup also relied heavily on geographical data. We could have leveraged the power of PostGIS, a spatial database extender for Postgres, to handle all of our geographical data needs. This would have allowed us to easily incorporate features such as location-based searching and mapping within our app. Overall, utilizing Supabase and its integration with PostgreSQL would have greatly simplified our architecture and allowed us to focus on developing our app’s core features. Using Supabase could have been a game changer for our startup. I don’t think it could have saved us from closing, but it would definitely have saved us some money on development and infrastructure.&lt;/p&gt;
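
&lt;p&gt;As a small illustration of what PostGIS gives you out of the box, a nearby-search query could look like this (the &lt;code&gt;points_of_interest&lt;/code&gt; table here is hypothetical, for illustration only):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;-- Hypothetical schema for location-based search with PostGIS
create extension if not exists postgis;

create table points_of_interest (
  id bigint generated by default as identity primary key,
  name text not null,
  location geography(point, 4326) not null
);

-- A GiST index keeps radius queries fast
create index poi_location_idx
  on points_of_interest using gist (location);

-- Everything within 2 km of a given longitude/latitude
select name
from points_of_interest
where st_dwithin(
  location,
  st_setsrid(st_makepoint(30.31, 59.94), 4326)::geography,
  2000
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;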

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rFVROkuc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozpayq0f52fvgza5ic2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rFVROkuc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozpayq0f52fvgza5ic2c.png" alt="Map with points of interest from app" width="880" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison
&lt;/h2&gt;

&lt;p&gt;Using Supabase, we would have eliminated the need for a Kubernetes cluster for both production and staging, as well as our managed MongoDB instance and monitoring infrastructure (we used the Elastic Stack). This would have reduced the infrastructure costs for our startup. In addition, it would have allowed me to focus on hiring just one developer to work on our mobile apps, rather than needing to find several engineers to work on microservices and the admin portal. This would have resulted in cost savings of up to 6–7 thousand dollars per month, which could have been invested in other business areas. Overall, Supabase offered a simpler and more cost-effective alternative for our startup and is worth considering for any startup or service within a big company. At least, this is my opinion.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--q6t2RxTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/doxhunkwrexk2fq1ebjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--q6t2RxTF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/doxhunkwrexk2fq1ebjq.png" alt="An example of what our architecture may have looked like if we used Supabase." width="880" height="642"&gt;&lt;/a&gt;An example of what our architecture may have looked like if we used Supabase.&lt;/p&gt;

&lt;p&gt;And what’s important is that I am not afraid of vendor lock-in. You can work with Supabase using a variety of programming languages, including JavaScript, Dart, Python, or Go. Additionally, Supabase is designed to scale, making it suitable for both small startups and large enterprises. It can be used in the cloud or on-premises and integrated with other open-source projects, allowing for a high degree of customization and flexibility in building and deploying your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, building a microservices architecture can be a challenging and costly endeavor, as we experienced in our own startup journey. However, &lt;a href="https://supabase.com/"&gt;Supabase&lt;/a&gt; offers a simpler and more cost-effective alternative, with built-in features that can replace many of the microservices that a typical startup would need. From user management and file storage to realtime APIs and automatic data management, Supabase has the potential to save both time and money.&lt;/p&gt;

&lt;p&gt;While I could not use Supabase in our own startup, I hope our experience and insights will encourage others to consider it as a viable option for their own projects. Supabase may not be the best fit for every project, so it’s worth evaluating it alongside other alternatives on the market that may better suit your needs. I would highlight Pocketbase as a possible one, but I would still choose Supabase for the majority of projects, reserving Pocketbase for certain infrastructure-focused projects where a significant amount of custom Golang code is required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C2Fi40Vu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvop8kzs8vhsu76nsd9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C2Fi40Vu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hvop8kzs8vhsu76nsd9z.png" alt="Few app screens" width="880" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  P.S.:
&lt;/h2&gt;

&lt;p&gt;It has been some time since I wrote the first draft of this article, and as I reflected on how Supabase could have helped my previous startup, I couldn’t help wanting to be a part of the Supabase team. Imagine my surprise when I received an email from them announcing that they were hiring for a QA position. I immediately applied, and to my delight, I received an offer. Now, as a member of the Supabase team, I have had the opportunity to work with the platform on several personal projects and witness firsthand how it assists other startups and large companies in building their products. It is exciting to be a part of a company that is making such a significant impact in the tech industry.&lt;/p&gt;

&lt;p&gt;If you haven’t tried out Supabase yet, &lt;a href="https://supabase.com/"&gt;you should give it a try&lt;/a&gt;! And if you liked the design of the app, feel free to reach out to &lt;a href="https://choice.studio/"&gt;choice.studio&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>startup</category>
      <category>programming</category>
      <category>architecture</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
