This is my stack and my process. It’s not the best and there are many others, but this one is mine 😊
In January of 2021, I quit my job as the CTO of a healthcare services company to be my own boss. In April 2021 I decided to pivot from my original business idea to a B2B SaaS software product called Assetbots. While the story leading up to the pivot is interesting, I will have to save it for another time. For now, I’d like to talk about my software development process, technology stack, and deployment process.
In this post, I’ll cover:
- Technology and Process Goals for Indie Founders
- My Tech Stack
- My Hosting Stack
- How I Deploy
- How Much it Costs
- Takeaways and Thoughts for the Future
Technology and Process Goals for Indie Founders
The primary goal for any indie founder should be speed. Working from that perspective, every choice you make while building your business that slows you down should be evaluated based on its Return on Investment of Time (ROIT). This may seem simple and obvious, but it is important enough that it cannot be overstated.
I have made many choices that have slowed me down. In fact, building Assetbots has not been particularly fast. However, I constantly review my ROIT to make sure I’m making what I feel are the right compromises for my business. Even though speed is my primary goal right now, it is not my only one, and whether I am meeting my speed goal or not must be evaluated in context.
In addition to speed, other technology and process goals that I believe are critical for indie founders are:
- Familiarity: You must think about so much more than the technology of the product when building a business, so it is important to limit how much new tech you need to learn.
- Simplicity: You will be context switching between development, marketing, sales and administration, so your tech and your processes should be simple and understandable.
- Repeatability: It should be easy to do what works over and over again, both from a process perspective (creating a proposal, keeping track of feedback, evaluating metrics) and from a development perspective (adding a feature, fixing a bug).
- Price: Whether you’re bootstrapping (like me) or not, price matters. Don’t spend $1,000 when you could spend $100, but don’t obsess over getting that down to $10 either.
Of course, all of these goals are nuanced, and every business is unique. However, keeping speed, familiarity, simplicity, repeatability and price in mind as I work my way toward product-market-fit has been invaluable for me and the health of Assetbots so far.
My Tech Stack
Assetbots is a B2B SaaS delivered exclusively as a web application over the public internet. There is an effectively infinite number of technologies you could choose to build such a product, but here are the ones I use, from the bottom up:
Database
All non-binary data is stored in Microsoft SQL Server. Like many of my tech stack choices, I chose SQL Server because of its combination of familiarity and simplicity (in the context of the rest of my stack and my history). While it’s not the best or cheapest tool for the job in the absolute sense, it is both the best and the cheapest for me and my business right now.
Server
On the server, Assetbots is written in C# using .NET 6. The server is a monolith application that
- Talks HTTP and delivers HTML
- Provides API endpoints
- Handles webhooks
- Talks to clients via websockets
- Spins up Node.js processes
- Collects and reports analytics
- Runs background jobs
- Talks to third-party services
- And more
It’s the opposite of a microservices architecture. Moreover, it’s all developed in a single Visual Studio solution with over 100 individual projects.
This one solution produces multiple web applications, console applications and client (JavaScript) applications when built – including the marketing site and main web application. A cold build takes almost 5 minutes on my laptop (although most of that time is spent in Webpack). Despite this, it’s still a positive ROIT for one reason: most of this code existed before I pivoted to Assetbots. I got a huge head start by not having to develop (or learn!) a database layer, auth layer, permissions system, common app services, build and deployment systems, and more. And despite how slow a cold build is, I rarely have to perform one.
Client
On the client, Assetbots is written in TypeScript using React. Like the server, the client is a monorepo using Yarn workspaces and contains the code for all of the web applications produced by the Visual Studio solution, as well as some additional services like MJML email templating.
While I use a ton of third-party libraries, a lot of the client is hand-rolled. I think Next.js and create-react-app are great, but this code has been adapted across multiple projects of mine and is therefore faster for me to ship with.
A few additional details on my client stack:
- I use a hand-written Webpack configuration that is modularized and shared among all my projects. It is slow, but it provides all the batteries. (A rough sketch of the shared-config idea follows this list.)
- I use tailwindcss with a light sprinkling of Material Design as a starting point for design and layout. I do not have a designer, so it is essential that I can make attractive, usable interfaces easily.
- There is no Redux, MobX or any other state management library to be found. Most state is handled at the feature level using React context; there is a sketch of the pattern after this list.
- State that is synced with the server is handled using a combination of React Query and Replicache. I’ll be doing a writeup about my Replicache architecture in a future post.
- The client is pre-rendered on the server and then hydrated. The code for this is custom but not overly complicated, and allows me to achieve Next.js-level initial render performance in production. The last sketch below shows the general shape of the technique.
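First, the shared Webpack configuration. Here is a heavily simplified sketch of the factory pattern I mean; the file name, options and loader choices are illustrative rather than lifted from my actual config, which covers much more (CSS, assets, workers, HMR and so on):

```typescript
// webpack.shared.ts (illustrative) – one factory that every app's webpack.config.ts calls.
import type { Configuration } from "webpack";

interface AppOptions {
  name: string;       // e.g. "www" or "app"
  entry: string;      // absolute path to the app's entry module
  outputDir: string;  // absolute path for emitted bundles
  production: boolean;
}

export function createConfig(opts: AppOptions): Configuration {
  return {
    name: opts.name,
    mode: opts.production ? "production" : "development",
    entry: opts.entry,
    output: {
      path: opts.outputDir,
      filename: opts.production ? "[name].[contenthash].js" : "[name].js",
    },
    resolve: { extensions: [".ts", ".tsx", ".js"] },
    module: {
      rules: [
        // TypeScript/React; a real shared config would also handle CSS, images, workers, etc.
        { test: /\.tsx?$/, exclude: /node_modules/, use: "ts-loader" },
      ],
    },
    devtool: opts.production ? "source-map" : "eval-source-map",
  };
}
```

Each application then has a tiny webpack.config.ts of its own that simply calls createConfig with its entry and output paths.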
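Next, feature-level state with React context. It boils down to a small provider plus a hook scoped to a single feature rather than the whole app; the names below are made up for the example:

```tsx
import React, { createContext, useContext, useMemo, useState } from "react";

// Illustrative feature state: a search filter for an asset list.
interface AssetFilterState {
  search: string;
  setSearch: (value: string) => void;
}

const AssetFilterContext = createContext<AssetFilterState | undefined>(undefined);

// The provider wraps only the feature that needs this state, not the whole app.
export function AssetFilterProvider({ children }: { children: React.ReactNode }) {
  const [search, setSearch] = useState("");
  const value = useMemo(() => ({ search, setSearch }), [search]);
  return <AssetFilterContext.Provider value={value}>{children}</AssetFilterContext.Provider>;
}

// Components inside the feature read and update the state through a small hook.
export function useAssetFilters(): AssetFilterState {
  const ctx = useContext(AssetFilterContext);
  if (!ctx) throw new Error("useAssetFilters must be used inside an AssetFilterProvider");
  return ctx;
}
```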
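Finally, pre-rendering. Stripped of the custom plumbing, the technique is to render the React tree to HTML in a Node.js process and hydrate that markup in the browser. This sketch assumes React 18 and only shows the general shape, not my actual code:

```tsx
// prerender.tsx (illustrative) – runs in a Node.js process spawned by the server.
import { renderToString } from "react-dom/server";
import { App } from "./App"; // hypothetical root component

export function prerender(): string {
  // The server embeds this markup in the HTML it sends to the browser.
  return renderToString(<App />);
}

// client.tsx (illustrative) – the browser bundle attaches to the prerendered markup.
import { hydrateRoot } from "react-dom/client";

const container = document.getElementById("root");
if (container) {
  // hydrateRoot reuses the server-rendered DOM instead of rebuilding it from scratch.
  hydrateRoot(container, <App />);
}
```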
Marketing
The marketing site is developed exactly like the main web application using the same basic architecture. The big addition is an integration with DatoCMS for content management.
As I’m sure you’ve noticed, there is a theme here. The marketing site can access and make use of all the code developed for the web application, on both the client and the server. In a vacuum, it would be faster to develop the marketing site using a tool like Next.js, but in context I was able to launch faster this way.
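To give a sense of what the integration involves: DatoCMS serves content over a GraphQL API, so pulling content into the marketing site is essentially one authenticated POST per query. This is a simplified sketch, and the field names depend entirely on the models defined in your DatoCMS project:

```typescript
// Illustrative only – "allArticles", "title" and "slug" are hypothetical model fields.
interface Article {
  title: string;
  slug: string;
}

export async function fetchArticles(apiToken: string): Promise<Article[]> {
  const response = await fetch("https://graphql.datocms.com/", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiToken}`,
    },
    body: JSON.stringify({ query: "{ allArticles { title slug } }" }),
  });
  if (!response.ok) {
    throw new Error(`DatoCMS request failed: ${response.status}`);
  }
  const { data } = await response.json();
  return data.allArticles;
}
```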
My Hosting Stack
Assetbots is hosted in Azure exclusively through PaaS services. I don’t use any virtual machines or containers. I maintain four subscriptions, one for each environment: development, test, quality assurance and production.
The architecture is very simple and looks like this:
- cmn: Common services for an entire environment (development, test, QA or production). As of writing, only a Key Vault and a Storage Account are part of this layer.
- www: Services for hosting the marketing site. This includes a Key Vault, Storage Account, Service Bus Namespace, Azure SQL database and an App Service.
- app: Services for hosting the web application. This includes a Key Vault, Storage Account, Service Bus Namespace, Azure SQL database, SignalR service and an App Service.
And that’s it. My goal with this is to strike the right balance of cost, reliability and maintenance overhead. This deployment is expensive (more on that below), but not so expensive that it changes how much runway I have. In return for the price, I get four completely isolated environments that are defined entirely in code and have proven extremely reliable so far.
How I Deploy
There are two types of deployments for Assetbots: infrastructure and code. While I use similar tools for both, they are not identical.
Deploying Infrastructure
Infrastructure (in other words, my hosting architecture) is defined entirely in code using a combination of Azure Resource Manager templates and PowerShell scripts. Terraform is the industry standard, but ARM templates are more than sufficient for my simple use case. Using ARM templates, I’m able to define a single file that deploys my entire architecture idempotently, in parallel.
When I need to deploy an infrastructure change (for example, I recently upgraded my Node.js version from 14.16.0 to 16.9.1), I update the relevant Template.json
file, commit the change, and push to develop
. Within a second or so, I can navigate to my GitHub Actions panel, choose the workflow for the environment I want to deploy to, and click Run Workflow.
All the keys required to deploy the infrastructure are stored in the repository’s Actions Secrets. However, if GitHub is down and I need to deploy urgently, I can execute the same script via PowerShell from my laptop. It’s a bit more cumbersome because I must add all of the secrets via command-line arguments, but it gets the job done.
In addition to point-and-click infrastructure deployment, I also automate deployment and teardown of the QA environment daily. This is both to save cost (so it’s only running during the day when I’m using it) and to ensure that I can “easily” spin up a new environment should Azure have a regional outage. If I were serving consumers rather than businesses, I would probably skip the QA environment altogether until my business was bigger.
Deploying Code
Code is also deployed via GitHub Actions. The process is simple:
- A push to the develop branch triggers a release build and the creation of a tagged release, with deployment packages as assets. This release is marked as a pre-release.
- One of the triggers of the Code – Deploy QA workflow is the creation of a new release, so the new release is automatically deployed to QA. The workflow itself invokes a series of PowerShell scripts that download the release, unpack it and deploy it using the Azure Az PowerShell module. This is successful only after a status endpoint that performs several health checks reports that everything is up and running.
- Once I’ve smoke-tested the release manually by navigating around the QA environment in my browser, I merge develop into main. This triggers the Code – Deploy Prod workflow. This workflow promotes the pre-release and deploys it to production. One additional step here is to first deploy to a staging slot, verify the slot’s status endpoint, and then promote the slot to production.
I’m happy with this setup overall. It’s extremely easy to roll back a bad release (database migrations notwithstanding), because each of the workflows accepts a manual trigger with the release tag name as an optional parameter. I can also use the Azure portal to swap slots with the previous deployment almost instantaneously. There is a lot of flexibility here that doesn’t require waiting for a revert commit to be built, tested and finally deployed.
The main downside is how long it takes to get the initial pre-release created in the first place: about 20 minutes. Promoting a release to production takes about 2 ½ minutes, in comparison.
How Much it Costs
I alluded to it above, but this environment is expensive, at least for the scale that I’m at and the service being delivered. I’m aware that I could run a virtual private server somewhere with Postgres and Nginx for basically $0. Even so, here are my most recent invoice numbers:
- Development: $45.06. This includes $45 for my Visual Studio Professional subscription, so it’s really $0.06 in actual hosting.
- Test: $0.26.
- QA: $62.90. This breaks down to about $5 for SQL Server and $57 for App Service instances; everything else is a rounding error.
- Production: $293.00. My next invoice will be at least $60 cheaper because I’ve canceled a couple of addon services that are not necessary. Again, the bulk of the cost here is in App Service instances: about $150. Another $50 for SignalR and $35 for SQL Server.
At my scale, $400 per month matters. It is my biggest single operational expense. But it doesn’t move the needle on the length of my runway, which is all that really matters. Given that, I’m happy with the tradeoffs as they currently stand.
Takeaways and Thoughts for the Future
Assetbots is made up of some solid engineering. I’m proud of the architecture and the product that architecture enables. Even so, I might make different choices if I were presented with a clean slate to build it from. With 20/20 hindsight, I would:
- Make my builds faster by forcing myself onto esbuild from the beginning, even if that makes delivering some capabilities harder (like web workers and service workers, Hot Module Replacement, and so on). A rough sketch of what that could look like follows this list.
- Make development faster by sharing more code between client and server – for example, running my API endpoints on a platform like deno.
- Make my deployment cheaper by using multiple hosting options for multiple needs – for example Vercel for the marketing site, a container for the MVC bits, and deno for the API endpoints.
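To illustrate the first point, part of esbuild’s appeal is that the whole build script could plausibly be this small (the entry point and options below are illustrative, not an actual Assetbots build):

```typescript
// build.ts (illustrative) – a minimal esbuild-based production build.
import { build } from "esbuild";

build({
  entryPoints: ["src/index.tsx"], // hypothetical entry point
  bundle: true,
  outdir: "dist",
  minify: true,
  sourcemap: true,
  target: "es2019",
}).catch(() => process.exit(1));
```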
Then again, the above would be faster and cheaper, but also more complicated. It might be worth it, but who knows?
As I march forward onboarding customers and searching for product-market-fit, my main concern continues to be speed of feature delivery. For feature delivery, my main bottleneck continues to be how fast my brain can ideate and execute. Until that changes or I run out of runway, I plan to keep things largely as they are, making only incremental improvements.
Thanks for reading, and please stay in touch if you’d like to follow along as I bootstrap my business by simply building a better mousetrap.
* The other ½ person is my wife, who helps with pretty much everything while working full-time and doing more than her fair share of raising our two daughters.