Yaokai Jiang for Momen

Building a Cost-Efficient No-Code Full-Stack Webapp Development Platform

TL;DR

Architect with performance and cost in mind from day one, and a cost-efficient, scalable no-code platform is entirely feasible.

Intro

Developers and entrepreneurs choose no-code for the promise of fast time-to-market and low development cost. However, a lingering and often not unfounded worry accompanies that decision: "what if my idea actually takes off?"

I started Momen with this in mind, "this" being a potentially unhealthy obsession with efficiency that predates the company itself. I am a strong believer that gains in efficiency are what gave us the abundance we now enjoy. So, from the get-go, I architected and budgeted Momen to be cost-efficient and scalable (up to a point, probably around 1M DAU). In this article, we'll break down the strategy and technical choices we've made to build a no-code platform that should take you from MVP to 1M DAU without breaking the bank, for both our clients and us.


Foundations of Cost-Efficient Architecture

Choosing Open Standards and Avoiding Vendor Lock-In

I personally have no axe to grind with proprietary technologies like DynamoDB, Spanner, Lambda, or whatever else the cloud providers have built. In fact, I think they are great choices in many situations. However, Momen should not depend on proprietary technologies. Choosing a no-code platform is already a big commitment; piling another commitment to a particular cloud provider on top of it is, to me, just too much. I want to keep our clients' options as open as possible, so they can bring their own cloud. Hence the stack: Kubernetes, Java, Spring, PostgreSQL, Ceph, MinIO, and so on.

Leaning on Kubernetes for Deployment

Kubernetes is the backbone of how we manage deployments in a cost-conscious, cloud-agnostic way. Yes, it comes with its own resource consumption, sometimes too much (think Istio and its sidecars). But combine it with containerization, and you're suddenly free to deploy wherever you want, scaling as needed without worrying about vendor-specific APIs or infrastructure quirks. Add StorageClasses and StatefulSets into the mix, and you've got persistent storage handled elegantly, no matter the cloud provider.

Part of the cost of running a cloud development/hosting platform is operational, and by leveraging Kubernetes we are able to build an automated, largely portable solution that greatly reduces that cost, especially when it comes to "bring-your-own-cloud" deployments.


Optimizing the Database for Cost Efficiency

Choosing PostgreSQL for the Core Database

PostgreSQL is performant and resource-efficient. We are able to run PostgreSQL with as little as 0.25 vCPU and 256 MB of RAM while maintaining reasonable performance for most clients (they can scale up their database if needed). This has been crucial to keeping cost down. Although we are still far from giving our clients a full RDS-like experience, we are able to offer pre-tuned databases that should satisfy most users' needs.

PostgreSQL's vast array of extensions also significantly reduces our development cost, letting us offer common functionality to our users without much additional work. Prime examples: PostGIS and pgvector.

Using PostgreSQL as Designed for Optimal Performance

Now, here's where many developers stumble. PostgreSQL is robust, yes, but if you misuse it, you can still tank your system's performance. Many solutions abuse features like JSONB or HStore, turning their relational database into a chaotic hybrid of data types. Sure, those features have their uses, but over-relying on them consumes more disk space, increases I/O, gives up referential integrity, and confuses the optimizer. The last point is especially noteworthy: unless the developer is aware that PostgreSQL's default statistics mechanism is almost completely unsuited to filtering, joining, or sorting on JSONB attributes, and manually creates the appropriate statistics, query planning can be completely off and slow queries down by a factor of a thousand or even a million.
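
As a concrete illustration of that fix, here is a minimal JDBC sketch; the `orders` table, its JSONB `payload` column, and the connection details are hypothetical. The CREATE STATISTICS form needs PostgreSQL 14 or newer, while the expression index works on older versions too.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical scenario: an "orders" table with a JSONB "payload" column that
// is frequently filtered on payload->>'status'. Without extra statistics, the
// planner falls back to generic selectivity guesses for that expression.
public class JsonbStatsExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/app", "app", "secret");
             Statement st = conn.createStatement()) {

            // PostgreSQL 14+: extended statistics on an expression, so ANALYZE
            // collects real distribution data for the extracted attribute.
            st.execute("CREATE STATISTICS IF NOT EXISTS orders_status_stats "
                     + "ON (payload->>'status') FROM orders");

            // Older versions: an expression index has the same side effect of
            // making ANALYZE gather per-expression statistics, and it also
            // speeds up the filter itself.
            st.execute("CREATE INDEX IF NOT EXISTS orders_status_idx "
                     + "ON orders ((payload->>'status'))");

            // Refresh statistics so the planner can use them right away.
            st.execute("ANALYZE orders");
        }
    }
}
```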

At Momen, we play by the rules. While we do support JSONB fields and expose them to users who choose to use them, we stick to PostgreSQL's strengths (tables, columns, indexes) whenever possible, to ensure performance doesn't degrade as the platform scales. Use the database as it was meant to be used, and it will reward you with speed, reliability, and scalability.

Efficient Multi-Tenancy with PostgreSQL Schemas

Momen handles multi-tenancy through PostgreSQL schemas. Rather than spinning up a new database for each project (which we used to do), we isolate each project within its own schema, also known as a namespace. This lets us multiplex a single database instance among as many as 1,000 free projects, while ensuring no project can see or otherwise interact with another project's data.
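
In rough terms, provisioning a project looks something like the sketch below. The schema and role naming, and the connection details, are invented for illustration, not our actual code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Illustrative sketch of schema-per-project isolation. Identifiers are
// hypothetical; a real implementation would also validate and quote them.
public class ProjectProvisioner {

    public static void provision(Connection admin, String projectId) throws Exception {
        String schema = "proj_" + projectId;
        String role   = "role_" + projectId;
        try (Statement st = admin.createStatement()) {
            st.execute("CREATE SCHEMA IF NOT EXISTS " + schema);
            st.execute("CREATE ROLE " + role + " NOLOGIN");
            // The role may only use its own schema: no cross-project visibility.
            st.execute("REVOKE ALL ON SCHEMA public FROM " + role);
            st.execute("GRANT USAGE, CREATE ON SCHEMA " + schema + " TO " + role);
            // Queries issued on behalf of this project resolve tables here only.
            st.execute("ALTER ROLE " + role + " SET search_path TO " + schema);
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection admin = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/shared_db", "admin", "secret")) {
            provision(admin, "42");
        }
    }
}
```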

PostgreSQL as a Queue

Why bring in more moving parts when you don't need to? Instead of introducing a separate queuing system like RabbitMQ, we repurpose PostgreSQL tables to act as our queue, using a combination of SELECT ... FOR UPDATE SKIP LOCKED, LISTEN/NOTIFY, and dedicated worker threads. Sure, the throughput is not going to reach millions, or even hundreds of thousands, of messages per second, but most projects have no need for that. We maintain exactly-once semantics for each message while saving roughly 1 GB of RAM per database compared to running a dedicated broker.
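
A stripped-down version of that pattern might look like this (plain JDBC; the `jobs` table and its columns are invented for illustration). Claiming and deleting a row in the same transaction is what keeps each message from being processed twice.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal worker loop over a hypothetical "jobs" table acting as a queue.
// FOR UPDATE SKIP LOCKED lets many workers poll concurrently without
// blocking on, or double-claiming, each other's rows.
public class PgQueueWorker {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret")) {
            conn.setAutoCommit(false);
            while (true) {
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                         "DELETE FROM jobs WHERE id = ("
                       + "  SELECT id FROM jobs ORDER BY id"
                       + "  LIMIT 1 FOR UPDATE SKIP LOCKED"
                       + ") RETURNING id, payload")) {
                    if (rs.next()) {
                        process(rs.getLong("id"), rs.getString("payload"));
                        conn.commit();      // the claim and the delete land together
                    } else {
                        conn.rollback();
                        Thread.sleep(500);  // or block on LISTEN/NOTIFY instead
                    }
                }
            }
        }
    }

    static void process(long id, String payload) {
        System.out.println("processing job " + id + ": " + payload);
    }
}
```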

Leveraging PostgreSQL Extensions for Enhanced Capabilities

As mentioned previously, we also integrate deeply with PostgreSQL's extensions; a small query sketch follows the list:

  • pgvector: Need similarity search but don't want to introduce a separate vector database? No problem. pgvector lets us handle that directly within PostgreSQL.
  • PostGIS: Geospatial queries are heavy by nature. But instead of resorting to a dedicated geospatial database, we integrate PostGIS into PostgreSQL. It handles those queries with efficiency and precision, without inflating costs.
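
As a flavor of what those extensions buy us, here is a hedged sketch of a pgvector similarity lookup and a PostGIS radius filter, over invented tables (`docs` with a vector `embedding` column, `places` with a geography `location` column):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical tables: docs(id, embedding vector(3)), places(name, location geography).
public class ExtensionQueries {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/app", "app", "secret")) {

            // pgvector: nearest neighbours by cosine distance, no separate vector DB.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id FROM docs ORDER BY embedding <=> ?::vector LIMIT 5")) {
                ps.setString(1, "[0.1,0.2,0.3]");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) System.out.println("doc " + rs.getLong("id"));
                }
            }

            // PostGIS: everything within 1 km of a point, using the geography type.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT name FROM places WHERE ST_DWithin(location, "
                  + "ST_SetSRID(ST_MakePoint(?, ?), 4326)::geography, 1000)")) {
                ps.setDouble(1, -122.4194);  // longitude
                ps.setDouble(2, 37.7749);    // latitude
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) System.out.println(rs.getString("name"));
                }
            }
        }
    }
}
```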

Designing an Efficient Backend for High Performance

Java + Spring Boot for Backend Services

When it comes to backend services, the Java ecosystem is a tried-and-true solution that provides both reliability and scalability. Performance-wise, Java is also much faster than other popular languages like Python or Ruby. Compared to Node.js, Java is naturally multi-threaded, so a service can use more than one core without extra scaling logic. That pairs nicely with our Kubernetes resource controls, since a service can burst beyond its allocated CPU as long as the other services on the node are idle. It is true that the JVM has a much higher memory overhead than Node.js, but this is partially mitigated by running multiple tenants on a single JVM.

Forgoing ORM for Direct Database Access

ORMs (Object-Relational Mappers) are a popular choice for talking to the database from the server. They have their place, but we decided that such an abstraction is not suitable for a no-code runtime: it consumes too much memory and makes query generation far less predictable than something like jOOQ. Combined with DataLoader, we generate efficient SQL queries and avoid the N+1 query problems that commonly plague ORM integrations.
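
To make the N+1 point concrete, here is a hand-rolled, hypothetical illustration of the batching idea (not our actual generated code): instead of one query per author, the keys collected during a render pass are resolved with a single `= ANY(?)` lookup.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hand-rolled illustration of DataLoader-style batching over JDBC.
public class AuthorBatchLoader {

    // Naive approach (N+1): one query per author id.
    // Batched approach: collect the ids, then resolve them all in one query.
    public static Map<Long, String> loadAuthorNames(Connection conn, List<Long> authorIds)
            throws Exception {
        Map<Long, String> result = new HashMap<>();
        String sql = "SELECT id, name FROM users WHERE id = ANY(?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setArray(1, conn.createArrayOf("bigint", authorIds.toArray()));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.put(rs.getLong("id"), rs.getString("name"));
                }
            }
        }
        return result;  // one round trip regardless of how many posts were rendered
    }
}
```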


Automation and Efficiency Enhancements

Automating Schema Migration

One of the drawbacks of using a relational database is that we now have to handle schema migration, which is a genuinely difficult problem. With that problem solved, however, our users can freely change their database's table structure, including the relationships between tables.
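
Heavily simplified, and with hypothetical table and column descriptors, the gist is to diff the model the user edited against what information_schema reports and emit the corresponding DDL:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: add columns that exist in the desired model but not yet in the
// live schema. A real migrator also handles renames, type changes, dropped
// columns, and foreign keys, and wraps the whole thing in a transaction.
public class SchemaMigrator {

    public static void migrate(Connection conn, String table,
                               Map<String, String> desiredColumns /* name -> SQL type */)
            throws Exception {
        Set<String> existing = new HashSet<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT column_name FROM information_schema.columns "
              + "WHERE table_schema = current_schema() AND table_name = ?")) {
            ps.setString(1, table);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) existing.add(rs.getString(1));
            }
        }
        try (Statement st = conn.createStatement()) {
            for (Map.Entry<String, String> col : desiredColumns.entrySet()) {
                if (!existing.contains(col.getKey())) {
                    st.execute("ALTER TABLE " + table
                             + " ADD COLUMN " + col.getKey() + " " + col.getValue());
                }
            }
        }
    }
}
```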

Automatic Slow Query Detection and Index Creation

Indexes are crucial for ensuring performance and simultaneously reducing cost for any reasonably sized project. On the flip side, over-indexing can reduce performance and increase cost, as updates, planning, and vacuuming all become more expensive. Appropriate indexing is also a skill beyond the reach of many developers, be they code or no-code. At Momen, we automate the detection of slow queries and have a system in place to automatically generate indexes where needed, taking that burden off developers' shoulders and keeping apps performant and cost-efficient.
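
Conceptually, and again much simplified, the loop looks like the sketch below: pull the worst offenders from pg_stat_statements, work out a candidate index, and build it without blocking writers. The column names assume pg_stat_statements on PostgreSQL 13 or newer, the threshold and the `orders(customer_id)` example are placeholders, and real index selection is far more involved than this suggests.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the detect-then-index loop. Deciding which column to index is the
// hard part and is only hinted at here; CONCURRENTLY avoids blocking writes.
public class SlowQueryIndexer {

    public static void run(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT query, mean_exec_time, calls "
               + "FROM pg_stat_statements "
               + "WHERE mean_exec_time > 200 "   // ms threshold, tune to taste
               + "ORDER BY mean_exec_time * calls DESC LIMIT 10")) {
            while (rs.next()) {
                // ... analyze the query's WHERE/JOIN/ORDER BY columns here ...
                System.out.printf("slow: %.1f ms avg, %d calls: %s%n",
                        rs.getDouble("mean_exec_time"),
                        rs.getLong("calls"),
                        rs.getString("query"));
            }
        }
        // Example of the resulting action for a hypothetical orders(customer_id) filter.
        // CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
        conn.setAutoCommit(true);
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE INDEX CONCURRENTLY IF NOT EXISTS orders_customer_id_idx "
                     + "ON orders (customer_id)");
        }
    }
}
```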

Automated Generation of Pre-Signed URLs for CDN/S3

File uploads are typically done in one of two ways: either relay them through the server, or upload directly to S3 from the client. The server relay is more costly and can become a bottleneck, but offers more control. At Momen, we decided to bypass the server. We enforce access control by generating pre-signed URLs on the server; clients then use those URLs to upload files directly to the storage service.
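
The server-side piece is conceptually small. Here is a minimal sketch using the AWS SDK for Java v2 presigner, one reasonable client for S3-compatible storage such as MinIO; the bucket, key, and ten-minute expiry are placeholders, not our production values.

```java
import java.net.URL;
import java.time.Duration;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.PresignedPutObjectRequest;
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest;

// Sketch: the server authorizes the upload, signs a short-lived PUT URL, and
// hands it to the client; the file bytes never pass through the application server.
public class UploadUrlService {

    private final S3Presigner presigner = S3Presigner.create();

    public URL createUploadUrl(String bucket, String objectKey, String contentType) {
        PutObjectRequest put = PutObjectRequest.builder()
                .bucket(bucket)
                .key(objectKey)
                .contentType(contentType)
                .build();

        PutObjectPresignRequest presignRequest = PutObjectPresignRequest.builder()
                .signatureDuration(Duration.ofMinutes(10))  // short-lived on purpose
                .putObjectRequest(put)
                .build();

        PresignedPutObjectRequest presigned = presigner.presignPutObject(presignRequest);
        return presigned.url();  // the client PUTs the file directly to this URL
    }
}
```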

Frontend Dependency Detection

One of the most important optimizations we've made is in how we fetch data from the backend. Using automated dependency detection, the runtime frontend only requests the fields it needs for rendering, nothing more, nothing less. This cuts down on excess data transfer, reduces the query load on the database, and ensures a faster user experience. Multiple levels of related data can be fetched in one go, reducing both round-trip time and connection overhead on the client as well as the server.
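
Purely as an illustration (the tables and helper below are hypothetical, not our generated code), a dependency-driven fetch for a page that renders only a user's name and their post titles could collapse into a single query like this:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Hypothetical illustration of a dependency-driven fetch: only the fields the
// page actually renders are selected, and one level of related data (post
// titles) is folded into the same round trip via a correlated json_agg.
public class DependencyDrivenFetch {

    public static void fetchUserCard(Connection conn, long userId) throws Exception {
        String sql =
            "SELECT u.name, "
          + "       (SELECT COALESCE(json_agg(p.title), '[]'::json) "
          + "        FROM posts p WHERE p.author_id = u.id) AS post_titles "
          + "FROM users u WHERE u.id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    System.out.println(rs.getString("name") + " -> "
                            + rs.getString("post_titles"));
                }
            }
        }
    }
}
```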


Advanced Logging, Monitoring, and Type-Checking

Using a Dedicated Rust Backend for Logging and Monitoring

For logging and monitoring, we've turned to Rust, a language that excels in high-performance, low-latency applications. By dedicating a Rust backend to logging and monitoring tasks, we minimize the impact on the core system while still gathering crucial insights. A separate PostgreSQL database stores and analyzes this data instead of a more common logging stack like Elasticsearch; PostgreSQL is more than enough for our users' scale while being orders of magnitude cheaper to run than in-memory solutions.

Type-Checking in the Browser

Keeping type-checking in the user's browser rather than relying on server-side processing greatly reduces server cost. While the actual compute cost is not excessively high, reliably checking types requires loading most of the project (the equivalent of its source code) into memory. That consumes a lot of RAM and poses awkward questions about when it can be unloaded. Since the project is already loaded in the user's browser, keeping the type-checking logic there not only reduces the load on our servers but also speeds up the development cycle by providing instant feedback. I'd call it a lightweight, distributed approach that improves both performance and developer productivity.


The Path to Sustainable Scalability

The Long-Term Cost Benefits of Efficient Engineering

It’s tempting to cut corners early on, but investing in smart, efficient engineering pays off tenfold in the long run. Our philosophy at Momen.app is to build once—build well. We are not perfect by our own standards, but that is what we try to achieve. By making strategic architectural decisions upfront, we can avoid the pitfalls of scaling that many no-code platforms encounter. This results in a platform that can grow without spiraling costs.

Cost-Effective and Scalable No-Code Development

Here's the bottom line: no-code platforms can be scalable and cost-effective, but only if you put the right architecture and engineering practices in place. In that respect they are no different from any other software. At Momen.app, we aim to prove that with the right mix of open standards, automation, and efficient design, you can deliver powerful no-code solutions that don't buckle under pressure, ensuring that both the platform and its users thrive.


Conclusion

Building a no-code platform that balances cost and scalability is no easy feat, but it’s entirely possible with the right strategy. By investing in the right tools, choosing the right architectures, and leveraging automation, we will continue to create a system that can grow with our users without runaway costs. No-code doesn’t have to mean inefficient—it just takes smart engineering, and a little bit of foresight.

At Momen, we’re ready for the future, and our clients are too.
