Running a single product, you get a steady line of charts: traffic, signups, conversion. You tune each dial and hope the curve bends. Run a portfolio of fourteen products and a different picture emerges. The patterns that were invisible in one product stand out like blinking lights once you can compare them against each other.
This is a building-in-public note about the unfair advantage that comes from measuring many products side by side, with shared GA4 + Clarity instrumentation and one place to look at the data.
The setup
We run a portfolio at Inithouse: roughly fourteen small products at different lifecycle stages. Some are days old, others have months of real traffic. All of them ship the same analytics baseline:
- GA4 on every domain, with the same event schema
- Microsoft Clarity on every domain, with the same session triggers
- A shared Looker Studio workspace where every property is a drop-down away
The payoff is not faster dashboards. The payoff is being able to ask one question across the whole portfolio and see how the answer changes with product age, category, and traffic source.
Pattern 1: the "first 48 hours" scroll signal
One of the cleanest cross-portfolio patterns we see is on new products. For every product we launch, Clarity's scroll-depth distribution in the first 48 hours is weirdly predictive of what that product's bounce rate will look like three months later.
Products where the 75th-percentile session scrolled past the first CTA on day one (our Czech vibe-coding community Vibe Codéři was a textbook case) tend to keep a healthy return-visit cohort. Products where the 75th-percentile session bailed before the fold (early iterations of Pet Imagination) needed a rebuild of the hero section before anything else moved.
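The signal itself is cheap to compute. A minimal sketch, with made-up session data: scroll depths are fractions of page height, and cta_position marks where the first CTA sits on the page (both names are illustrative, not our real schema).

```python
# Hypothetical sketch of the day-one scroll signal. The data shape and
# the cta_position value are illustrative assumptions.
import statistics

def p75_clears_cta(scroll_depths: list[float], cta_position: float) -> bool:
    """True if the 75th-percentile session scrolled past the first CTA."""
    # statistics.quantiles with n=4 returns the three quartile cut points;
    # index 2 is the third quartile (the 75th percentile).
    p75 = statistics.quantiles(scroll_depths, n=4)[2]
    return p75 >= cta_position

# Example: CTA at 50% of page height, four recorded sessions.
print(p75_clears_cta([0.2, 0.5, 0.8, 0.9], cta_position=0.5))  # → True
```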
A single-product founder gets this data too, but without peer products to anchor against, "60% scroll past the fold" is just a number. With a portfolio, that same 60% is either a red flag (our stable baseline is 78%) or a green light (our new-product baseline is 45%).
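That baseline-relative reading can be sketched in a few lines. The thresholds below come from the figures quoted above; the amber band and the classify_scroll_rate helper are illustrative assumptions, not our real alerting logic.

```python
# Portfolio baselines from the text; everything else is a hypothetical sketch.
STABLE_BASELINE = 0.78       # mature products: share of sessions scrolling past the fold
NEW_PRODUCT_BASELINE = 0.45  # typical day-one rate for a fresh launch

def classify_scroll_rate(rate: float, is_new_product: bool) -> str:
    """Label a scroll-past-fold rate relative to the relevant peer baseline."""
    baseline = NEW_PRODUCT_BASELINE if is_new_product else STABLE_BASELINE
    if rate >= baseline:
        return "green: at or above portfolio baseline"
    if rate >= baseline * 0.8:  # assumed tolerance band
        return "amber: slightly below baseline"
    return "red: well below baseline"

# The same 60% reads very differently depending on the peer group:
print(classify_scroll_rate(0.60, is_new_product=False))  # → red: well below baseline
print(classify_scroll_rate(0.60, is_new_product=True))   # → green: at or above portfolio baseline
```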
Pattern 2: event mapping drift
Here's one that only shows up when you audit many properties at once: event names drift.
We standardized an event set early: cta_click, form_submit, generator_run, pricing_view. When one developer ships, they use the canon. When another ships, they sometimes improvise: cta_clicked, form_submitted, run_generator. In a single GA4 property you notice nothing. The chart still goes up. In a portfolio view, one property is suddenly missing from a funnel comparison and the cause is almost always event drift.
We now run a weekly job that lists the top 10 events per property and flags anything that doesn't match the canon. It takes 30 seconds of attention per week and has saved us from at least two incidents where a product looked dead in analytics but was actually converting fine, just under a renamed event.
If you're building a portfolio, put the canonical event list in a YAML file next to your deploy config. Future you will send a thank-you note.
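The drift check itself is small. A hedged sketch of the idea: the canon below matches the event set named above, but the property names, the flag_event_drift helper, and the fuzzy-matching approach are illustrative assumptions (the real job would read the top-events report from each GA4 property).

```python
# Hypothetical sketch of the weekly event-drift audit.
import difflib

# Canonical event set (in practice, loaded from a YAML file next to the deploy config).
CANON = ["cta_click", "form_submit", "generator_run", "pricing_view"]

def flag_event_drift(top_events: dict[str, list[str]]) -> dict[str, list[tuple[str, str]]]:
    """Per property, list each non-canonical event with its likely intended canonical name."""
    drift: dict[str, list[tuple[str, str]]] = {}
    for prop, events in top_events.items():
        offenders = []
        for name in events:
            if name not in CANON:
                # Fuzzy-match against the canon to suggest the probable intent.
                match = difflib.get_close_matches(name, CANON, n=1, cutoff=0.6)
                offenders.append((name, match[0] if match else "?"))
        if offenders:
            drift[prop] = offenders
    return drift

# Illustrative property names and event lists:
report = flag_event_drift({
    "product-a": ["cta_click", "form_submit"],
    "product-b": ["cta_clicked", "run_generator"],
})
print(report)
# → {'product-b': [('cta_clicked', 'cta_click'), ('run_generator', 'generator_run')]}
```

Properties that only emit canonical names drop out of the report entirely, so a non-empty report is itself the alert.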
Pattern 3: the Clarity "rage-click" correlation that wasn't
We had a belief for months that rage-clicks correlated with churn. Clarity surfaces them on every session recording, and it's intuitive: angry user, bad product, user leaves. We even had a threshold in our heads.
Running the same analysis across the portfolio broke the belief. On Audit Vibe Coding, high rage-click rate correlated with higher engagement, because the product is interactive and users were spamming the "re-run" button on purpose. On Magical Song, rage-clicks meant exactly what we thought. On a third product, the rage-click signal was dominated by a single broken element in the nav.
The same metric meant three different things across three products. A single-product founder would have picked one interpretation and been either dangerously right or confidently wrong. The portfolio forces you to unlearn the metric as a universal signal and treat it as something that needs per-product context.
Pattern 4: traffic source tells you which product you actually have
One of the most humbling cross-portfolio insights: the traffic source mix tells you what kind of product you built, regardless of what you thought you were building.
We have products we pitched as "tools for professionals" that turn out to get most of their traffic from casual search intent. We have products we launched as "casual consumer apps" that quietly rank for long-tail professional queries and have a tiny, high-intent conversion cohort. Watching Agents is a live example: launched as a general-audience prediction platform, but the organic traffic that actually sticks is coming from research and analyst queries.
If you run one product, you optimize toward whichever audience you originally imagined. If you run a portfolio, you learn early that audience is discovered, not chosen, and you start following the analytics instead of leading them.
What I would tell a portfolio-curious founder
Three things, based on a year of watching this data roll in:
One: standardize your analytics baseline before you have anything to measure. Same GA4 property setup, same event names, same Clarity config. The cost when the portfolio is small is invisible. The cost when the portfolio grows and you have to retroactively unify eight different schemas is painful.
Two: look at the same dashboard view across every product every week, even the dead ones. The patterns you need to see live in the differences between products, not in any one chart.
Three: treat your weakest-performing product as a control, not a failure. It tells you what "normal bad" looks like, which is the baseline you need to recognize when something on a stronger product quietly goes wrong.
A portfolio is not just a hedge against one product failing. It's a continuous calibration instrument for every other product you run. Once you get used to reading it that way, going back to single-product tunnel vision feels like flying with one eye closed.
Building Inithouse in public: 14 products, one analytics stack, one weekly review. More portfolio notes to come.