Jonas Scholz

Originally published at sliplane.io

Tech Stack Lessons from scaling 20x in a year

A year ago, I wrote about our tech stack and how it helped us run a lean cloud computing startup. Since then, we've scaled over 20x. That kind of growth is fun, but it also breaks a lot of things and assumptions, and it forces you to make hard choices, quickly :D

Here's what changed, what stayed the same, and what we learned along the way.

Tech Stack

What Stayed the Same

Some things just work. Our frontend is still Nuxt with TypeScript and Tailwind (RIP). Our backend is still Go with Gin. We still run on Hetzner bare-metal and use Firecracker for virtualization. Terraform still manages our infrastructure. Redis still handles caching. Crisp still powers customer support. AWS SES still sends our transactional emails.

If it ain't broke, don't fix it.

But plenty did break — or became too expensive to keep running the same way.

Observability: Axiom → Parseable

This was our biggest operational change. Last year I praised Axiom for logs, and it was great on the base plan. Until we scaled.


As our traffic grew, so did our need for better tracing and more detailed logs. Our Axiom bill exploded past €1,000/month and kept climbing. At that point, you have to ask yourself: is this sustainable? Obviously not lol.

We migrated to Parseable, self-hosted on Kubernetes with MinIO for S3-compatible storage, all running on bare-metal. The product still feels early, but the team is responsive and ships fixes fast when something breaks. Big shoutout to Anant and Deba!
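
If you're wondering what ingestion looks like: it's just HTTP. Here's a minimal Go sketch of shipping a log event to a Parseable stream. The endpoint path and stream header follow Parseable's docs as I understand them, and the URL, stream name, and credentials are placeholders, not our real setup:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// sendLog ships one structured event to a Parseable stream over HTTP.
// Endpoint path and X-P-Stream header are based on Parseable's docs;
// the URL, stream name, and credentials below are placeholders.
func sendLog(event map[string]any) error {
	body, err := json.Marshal([]map[string]any{event}) // Parseable ingests JSON arrays
	if err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodPost,
		"https://logs.internal.example/api/v1/ingest", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-P-Stream", "app-logs") // target stream (hypothetical name)
	req.SetBasicAuth("admin", "not-the-default-password")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("parseable ingest failed: %s", resp.Status)
	}
	return nil
}

func main() {
	_ = sendLog(map[string]any{
		"timestamp": time.Now().UTC().Format(time.RFC3339),
		"level":     "info",
		"service":   "api",
		"message":   "deployment finished",
	})
}
```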

Would I recommend it? If you can't keep trading boatloads of money for time, yes. Self-hosting observability is work, but at our "scale" (we are still tiny), it's worth it. We still use Grafana for dashboards and alerts; that hasn't changed (for now, the bill is starting to hurt).

Object Storage: Backblaze → IONOS/Hetzner

Last year we used Backblaze for blob storage. It was cheap and reliable. The problem wasn't technical; it was purely political and a question of positioning.

As we grew, so did the type of customers we attracted. Enterprise customers, especially European ones, started pushing back on storing their data with US providers. GDPR compliance, data sovereignty, internal policies: the reasons varied, but the message was clear. No US providers! So our crusade to replace all US providers began with Backblaze.

We moved to IONOS and Hetzner for object storage. Are they as good as Backblaze? No, not even close. But they're European, they're (barely) good enough, and they satisfy our customers' requirements. Honestly, if you're not required to use them, I wouldn't. It feels like we don't really have a choice here.

CDN: Cloudflare → Bunny

Same story as storage. Cloudflare is an incredible product with features we'll never use. But customers asked for a European alternative.

Bunny fits the bill. It doesn't have Cloudflare's full feature set, but it handles our CDN needs perfectly. It's fast, reasonably priced, and European. In this case it wasn't even a real tradeoff: Bunny does exactly what we need. For our super simple setup, the migration took less than 2 hours.

CI/CD: GitHub Actions → Namespace

GitHub Actions served us well, but it's stagnated. We needed nested virtualization for testing Firecracker stuff. We needed better performance. GitHub wasn't delivering.

We moved to Namespace for our runners. It's a great product — also European, which is becoming a theme here. The performance improvements alone were worth the switch.

That said, we'll probably migrate to completely self-hosted runners eventually. The more we scale, the more control we want.

Data Persistence: The Big One

This was our most significant architectural change. Last year, I bragged about running everything in Postgres with Timescale, including hundreds of millions of analytics rows. That worked great until our database hit 2TB.

At 2TB, Postgres becomes hard to manage. Stupid queries can take down prod, and scaling is painful. Database pros are going to laugh at me here; 2TB is probably nothing in the grand scheme of things! But I am not a Postgres pro, and honestly wasn't planning on becoming one. On top of that, the cost just started to hurt, especially considering that we want to do another 20x in 2026.

So we built something simpler: hot data lives in Postgres, then gets flushed to S3 as Parquet files. For queries, we use DuckDB to read directly from S3. DuckDB is amazing.
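
To make that concrete, here's a rough Go sketch of what a flush job like ours could look like, with DuckDB doing the heavy lifting: attach Postgres, select the cold rows, write them as Parquet straight to S3. The go-duckdb driver, endpoint, credentials, and schema are all illustrative assumptions, not our production code:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	// Community DuckDB driver; an assumption, not necessarily what we use.
	_ "github.com/marcboeker/go-duckdb"
)

// flushToParquet moves rows older than the cutoff out of Postgres and into
// a Parquet file on S3-compatible storage, with DuckDB doing the copying.
func flushToParquet(cutoff time.Time) error {
	db, err := sql.Open("duckdb", "") // in-memory DuckDB instance
	if err != nil {
		return err
	}
	defer db.Close()

	// Load extensions, point DuckDB at the S3 endpoint, attach Postgres.
	// Endpoint, credentials, and DSN are placeholders.
	for _, stmt := range []string{
		`INSTALL httpfs`, `LOAD httpfs`,
		`INSTALL postgres`, `LOAD postgres`,
		`SET s3_endpoint = 's3.eu-provider.example'`,
		`SET s3_access_key_id = 'KEY'`,
		`SET s3_secret_access_key = 'SECRET'`,
		`ATTACH 'host=db.internal dbname=app user=flush' AS pg (TYPE postgres)`,
	} {
		if _, err := db.Exec(stmt); err != nil {
			return err
		}
	}

	// One statement reads the cold rows from Postgres and writes Parquet to S3.
	copyStmt := fmt.Sprintf(
		`COPY (SELECT * FROM pg.public.metrics WHERE ts < TIMESTAMP '%s')
		 TO 's3://metrics-archive/metrics-%s.parquet' (FORMAT parquet)`,
		cutoff.UTC().Format("2006-01-02 15:04:05"),
		cutoff.UTC().Format("2006-01-02-15"))
	if _, err := db.Exec(copyStmt); err != nil {
		return err
	}

	// Only after the COPY succeeds is it safe to delete those rows from
	// Postgres (deletion and reconciliation are omitted here).
	return nil
}

func main() {
	if err := flushToParquet(time.Now().Add(-24 * time.Hour)); err != nil {
		log.Fatal(err)
	}
}
```

The nice part is that the whole Postgres-to-Parquet hop is a single COPY statement; there's no intermediate export format to babysit.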

The results surprised us. P99 latency actually improved. Why? Most queries are "give me the last 5 minutes of metrics" or "show me the last 500 logs." That's all hot data sitting in Postgres. Historical queries hit S3, and DuckDB handles Parquet files like a champ. Those are of course slightly slower if not cached.
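
The routing itself is a few lines. A sketch, assuming the same S3 setup as the flush job above, with the hot window, table schema, and bucket layout all made up for illustration:

```go
package storage

import (
	"database/sql"
	"time"
)

// hotWindow is how long rows stay in Postgres before being flushed;
// the exact value here is made up.
const hotWindow = 24 * time.Hour

// queryMetrics routes a time-range query: recent windows hit the Postgres
// hot store, older ranges are read straight from Parquet on S3 via DuckDB.
func queryMetrics(pg, duck *sql.DB, from, to time.Time) (*sql.Rows, error) {
	if time.Since(from) < hotWindow {
		// Hot path: covers "give me the last 5 minutes of metrics".
		return pg.Query(
			`SELECT ts, value FROM metrics WHERE ts BETWEEN $1 AND $2`,
			from, to)
	}
	// Cold path: DuckDB scans the Parquet files directly on object storage.
	return duck.Query(
		`SELECT ts, value
		 FROM read_parquet('s3://metrics-archive/*.parquet')
		 WHERE ts BETWEEN ? AND ?`,
		from, to)
}
```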

This architecture saves money, scales better, and plays to our strengths. We understand S3. We don't understand running a 10TB Postgres cluster :D

The Pattern

Looking back at all these changes, there's a clear pattern:

  1. European everything. Customer pressure pushed us toward EU providers. Again, this isn't a technical decision. It's a business reality when you grow beyond startups and indie hackers.

  2. Self-host at scale. SaaS products are great until your bill crosses a threshold. Then you have to do the math on whether your time is cheaper than their prices.

  3. Simple beats clever. We didn't build a fancy distributed database. We flush data to S3 and query it with DuckDB. It's not sexy, but it works! (Actually I think the simplicity is quite sexy, but not great for resume-driven development)

What's Next

We'll probably self-host our CI runners soon. We're evaluating alternatives to AWS SES since, you know, European.

The stack will keep evolving. That's the nature of building infrastructure at scale. But the core philosophy stays the same: keep it simple, keep it maintainable, and only add complexity when the problem forces you to.


That's where we're at in 2026. Twenty times bigger, a few hard lessons learned, and a stack that's more European than ever.

Cheers,

Jonas

Top comments (7)

david duymelinck

Why not have multiple Postgres databases, a hot and a cold one for a start? Over time the cold database could be split up by periods. That is what you are doing with the Parquet files, isn't it?
Then the queries will probably stay the same for the most part; there only needs to be a mechanism to connect to the right database.

If you are used to handling Postgres files, it seems more risky to convert the data from Postgres to Parquet files and run them in a database that is less familiar than Postgres.
It also introduces the risk of not fully transporting the Parquet file from S3 in case of a service outage.
Wouldn't suspending one or more cold database servers have a similar cost reduction?

Jonas Scholz

Thanks for the comment, though I'm not 100% sure I understand what you're saying - the point is also that I pay a lot for storing cold data; fast disks are expensive. S3 is cheap.

The risk of not transporting doesn't really exist, I think. I have good retry mechanisms for all error scenarios I could think of. Data is not deleted before I am 100% sure it's stored in S3. Even partial failures are reconciled!

If you suspend database servers you still pay for the disks, which isn't exactly what I'd like to do! :D

david duymelinck

I assumed data was the main cost, but I wasn't sure.

When I made the server suspension remark, I didn't think you needed to keep paying for the storage. But it makes sense from a hosting company standpoint.

The thing I'm still wondering is why you chose DuckDB over Postgres as the cold storage database. Even if you want to keep Parquet files, Postgres can import them too.
I assume not all S3 data will be on the cold storage database server; that would just move the cost of the storage from one server to another.
From the DuckDB documentation I gather it works best with large datasets, so how is the performance for the loaded data compared with Postgres?

The main goal of my comment is just trying to understand what made you pick that setup.
So it is a mix of trying to gather my thoughts and having questions.

Shayan

Very nice.

Jonas Scholz

Indeed sir.

atakanozt

Nice work!

Jonas Scholz

Thanks!