
HomelessCoder


I Launched. Users Showed Up. And Then I Watched Them Get Lost.

This is the original founder-side version of the story. From post 3 onward, the series was rewritten in a calmer, more technical style.

The first post in this series was about why Omnismith exists. This one is about what happened after I launched it.

The Parts Nobody Sees on the Commit Graph

I've been a software engineer for a long time. What I had never done was build a complete commercial product from zero — alone — with real money and real legal obligations on the line.

Some of what that actually involved:

Two full architectural iterations. The first codebase went in a direction I wasn't satisfied with. I scrapped it and started over with a modular monolith on my own framework, DDD, and hexagonal/use-case architecture. It was the right call — the backend genuinely became enjoyable to work in, not something to fight against.

Opening a legal entity while navigating immigration. I'm not an EU citizen. I opened a UK LTD — something I'd never done in my life, in a country that isn't mine, in a second language. It's surprisingly straightforward once you start, but it took mental energy I hadn't budgeted for.

Pricing based on actual economics. Not "what sounds reasonable" but walking through unit cost, comparable tools, where the free tier lines make sense, and why flat pricing over per-seat billing fits the developer audience I'm building for. That analysis took longer than I expected and I'm glad I didn't rush it.

A full DevOps pipeline, built and maintained by me. GitHub CI/CD, my own Kubernetes cluster, ArgoCD for GitOps deployments. I'm the infrastructure team. I'm also the backend. I'm also the frontend. I'm also the support desk.

Nobody helped me build any of this. I'm not complaining — it's genuinely something I'm proud of, even though bragging isn't really my thing. I'm writing it down because I think solo builders often undersell how much ground they're covering.

The First Registrations

After the soft launch, a small group of closed beta testers — people I actually know and trust — tried the early version and gave honest feedback. They caught small things I'd stopped seeing: a missing translation here, a confusing label there. They pointed at the exact moments where they got lost. They told me the onboarding wasn't landing.

That feedback was more valuable than months of me staring at my own code. I'm genuinely grateful for it.

And then real users — people I don't know — started showing up. Each one felt way more significant than it probably should have. After years as a hired engineer building things for other people's products, seeing a real stranger's account appear in my system landed differently. It still does.

The Blank Canvas Problem

Here's what the beta feedback actually taught me.

Omnismith's v1 onboarding worked like this: register → step through a tutorial → land in an empty project with nothing in it — no templates, no attributes, no data.

I thought that was fine. The tutorial explained the concepts. The user had a clean slate to start.

What actually happened: users finished the tutorial and then stared at an empty screen, having memorized a sequence of UI steps but with no immediate sense of what to actually do with them. The mental model wasn't formed yet. And blank canvases are paralyzing — they're for people who already know what they want to paint.

v2 is different. Now when you register, you don't get an empty project. You get a demo project pre-populated with a real dataset: a product catalog.

Omnismith demo project — product catalog with categories, brands, and products

It has 3 templates — Category, Brand, and Product — with 11 attributes across them. Some attributes are shared across templates (Name, Description). The Product template uses References to link each product to its Category and Brand, and even has a self-referencing "Related Product" link. There are 19 sample entities: 4 categories (Electronics, Furniture, Apparel, Books), 5 brands, and 10 products with real prices, stock status, release dates, and tags.
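For concreteness, here's the shape of that demo dataset sketched as plain Python data. This is my illustration of the structure described above, not Omnismith's actual schema or API, and the brand/product names are placeholders:

```python
# Illustrative sketch of the demo project's shape. NOT Omnismith's
# real schema or API -- just the data model described above, written
# out as plain data so the structure is visible.

templates = {
    # template name -> attributes; Name and Description are shared
    # across templates
    "Category": ["Name", "Description"],
    "Brand": ["Name", "Description"],
    "Product": [
        "Name", "Description", "Price", "In Stock",
        "Release Date", "Tags",
        # Reference attributes link entities to other entities,
        # including a self-reference back to another Product:
        "Category",          # reference -> Category
        "Brand",             # reference -> Brand
        "Related Product",   # self-reference -> Product
    ],
}

entities = {
    "Category": ["Electronics", "Furniture", "Apparel", "Books"],
    # placeholder names; the real demo ships actual sample data
    "Brand": [f"Brand {i}" for i in range(1, 6)],
    "Product": [f"Product {i}" for i in range(1, 11)],
}

assert len(templates) == 3
assert sum(len(v) for v in entities.values()) == 19  # 4 + 5 + 10
```

The point of keeping it this small: every concept the platform has (templates, shared attributes, references, self-references) appears at least once, so a new user meets the whole model without being buried in data.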

Omnismith v2 onboarding — interactive tour with contextual pointers

It's not a lot. But it's enough to instantly understand the shape of the platform. There's something to click on. There's data to edit. The templates tell you "this is how structure works." The entities tell you "this is what your data looks like."

On top of that, I added an interactive tour — something in the spirit of Intro.js — that walks you through the actual UI with contextual pointers. Not a wall of text in a help doc, but a live guide that highlights the element you're looking at right now. That combination — real data you can touch plus a tour that shows you where to look — landed much better than v1's tutorial-then-empty-canvas approach.
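Conceptually, a tour like that is just an ordered list of (UI element, explanation) pairs that get highlighted one at a time. A minimal, framework-agnostic sketch of the idea — the selectors and copy here are made up for illustration, not Omnismith's real tour:

```python
# Conceptual sketch of a contextual tour in the spirit of Intro.js:
# each step pairs a UI element selector with the message shown next
# to it. Selectors and messages are invented for this example.
tour_steps = [
    ("#template-list", "Templates define the structure of your data."),
    ("#entity-table", "Entities are the records themselves. Try editing one."),
    ("#reference-cell", "References link entities across templates."),
]

def run_tour(steps, highlight):
    # highlight(selector, message) is injected by the UI layer; it
    # would visually point at the element and show the message.
    for selector, message in steps:
        highlight(selector, message)
```

The design point is that each message appears next to the element it describes, at the moment the user is looking at it, instead of front-loading all the concepts before the user has seen any UI.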

My beta testers stopped asking "what do I do now?" Which was the only metric that mattered.

The Lesson That Should Have Been Obvious

The blank canvas problem is not new. Every tool with a new user flow has solved or failed to solve it. But when it's your tool, your mental model of it is so complete that you forget how much context you're carrying that your user doesn't have yet.

You see a blank project and feel possibilities. Your user sees nothing.

Show them a story they can edit. Don't make them write the opening chapter.

That was probably the most valuable lesson of my early founder days. Not a technical insight. Not an architecture decision. A UX realization delivered by real people being kind enough to tell me where they got stuck instead of just quietly leaving.

If you're building a tool and your new user flow ends with "and now they have a blank canvas" — reconsider that.


👉 omnismith.io — log in and see the demo project yourself
👉 app.omnismith.io — free tier, no credit card
👉 docs.omnismith.io — now actually worth reading


Part of the "I'm Building Omnismith in Public" series. Previous: I Built This Because I Was Tired of Building Admin Panels.

Top comments (2)

MaxxMini

The blank canvas problem resonated hard. I built a browser-only finance planning tool (React + IndexedDB, zero backend) and made the exact same mistake — users landed on an empty dashboard with budget categories to fill and just... froze.

The fix was similar to yours: a pre-populated demo dataset that loads via URL parameter. Instead of an empty budget grid, users immediately see someone who earns X, spends Y, and is 12% over in dining. The plan-vs-actual comparison clicks in seconds because there's real context to read.

Your framing of "show them a story they can edit" is the most concise way I've heard it put. In my case the demo literally tells a financial story — users replace demo numbers with their own, and by the time they've edited three categories, they've internalized the whole model without a tutorial.

Question: after switching to the demo project approach, did you measure time-to-first-meaningful-action (like creating a custom template)? Or was the signal mostly qualitative from beta testers? Curious because measuring onboarding funnels in tools without heavy analytics is always the tricky part.

Also — running your own K8s cluster as a solo founder is no joke. Do you find the infrastructure overhead eats into UX iteration time, or has the GitOps setup made deploys fast enough that it doesn't matter?

HomelessCoder

Thanks for sharing! I love that we landed on the exact same solution. I wish I’d known about this sooner, but it’s been a valuable lesson. Do you happen to have any favorite UX resources for those of us who haven't formally studied it? :)

To answer your questions:

On Metrics:
I don't have trackers or analytics integrated yet, though it's definitely on the roadmap. Thanks for the nudge - I should probably prioritize that soon, though I want to be careful not to compromise performance or privacy. I'm considering a hybrid or background solution that derives insights from the data itself rather than heavy client-side tracking.

On K8s and Infrastructure:
The cluster actually started as a personal experiment. I needed to learn K8s for a previous job, and rather than "testing in production" on my employer's infrastructure, I built my own. My background in Linux sysadmin work and networking definitely made the learning curve manageable.

I’m a big fan of building things from scratch - it provides a much deeper understanding of the underlying processes. The payoff has been worth it: I just commit to the repo, the tests run, the image builds, and ArgoCD triggers the rollout automatically. Plus, the cost savings compared to AWS or GCP are massive for a solo founder!

On your IndexedDB post:
I actually just read your piece on choosing IndexedDB. I totally get the appeal of zero-backend, but the trade-offs you mentioned are real: the lack of cross-device sync and the risk of browser data deletion are exactly the gaps my platform covers, along with a migration-less flexible schema.

If you ever decide to move past those IndexedDB limitations without losing that "developer-first" feel, I’d love to have you try my platform. It might be the middle ground you’re looking for to solve the sync and persistence issues!