DEV Community

Aimiten
22 Queries, Position 2, Zero Clicks — Because the Meta Description Said 2024

22 queries. Positions ranging from 1 to 5. 56 impressions. Zero clicks.

Not "near zero." Not "a handful." Zero. A blog post on valuefy.app sat at the top of Google's results for more than twenty search queries about SaaS valuation multiples in 2026, and not one searcher clicked it.

I went looking for why. What I found was a metadata bug so obvious I'm still annoyed I didn't catch it sooner — and a bigger pattern about what happens when automation runs on a schedule instead of on a signal.

The setup

This week's git log tells one story. Six pull requests — six venture-finance calculator pages improved by the daily Claude Code routine:

  • IRR Calculator (April 17)
  • Cap Table Calculator (April 16)
  • Dilution Calculator (April 15)
  • Vesting Calculator (April 15)
  • Funding Calculator (April 13)
  • Churn Rate Calculator (April 12)

Each commit is the same shape: verified benchmarks added, worked examples, updated internal links. Six pages touched in seven days. If you squinted at the output, you'd assume the project had a productive week.

GSC told a different story.

What the data showed

I pulled the 28-day page+query breakdown — which pages were getting impressions and for which specific queries. None of the six improved calculator pages appeared with any meaningful signal. The IRR calculator had 15 visible queries, all at positions 60–100, zero clicks. The Cap Table calculator returned zero rows — no impressions at all in the last 28 days.

But one page kept showing up in the data. A blog post:

/blog/saas-valuation-multiples-in-2026-why-profitability-now-trumps-growth-at-all-costs

| Query | Impressions | Position |
| --- | --- | --- |
| saas valuation multiples 2026 arr | 9 | 4.2 |
| saas company valuation multiples 2026 | 8 | 2.5 |
| b2b saas valuation multiples 2026 | 6 | 5.0 |
| saas startup valuation multiples 2026 | 5 | 3.0 |
| saas arr valuation multiples 2026 | 4 | 3.0 |
| saas valuation multiples arr 2026 | 3 | 3.7 |
| saas valuation multiples compression 2026 | 2 | 8.0 |
| rule of 40 saas valuation impact 2026 | 2 | 7.0 |

22 queries total. 56 visible impressions. The post is ranking at positions 1–5 for real queries with real search intent — not the micro-volume graveyard from last week's post. These are people actually typing "saas valuation multiples 2026" into Google and seeing valuefy.app in positions 2, 3, 4.

Clicks: 0.

Finding #1: The meta description says 2024

I curled the page as Googlebot. The title tag was fine:

<title>SaaS valuation multiples in 2026: why profitability now trumps growth-at-all-costs | Valuefy Blog</title>

Then I looked at the meta description:

<meta name="description" content="Discover how a focus on profitability is reshaping 
SaaS valuation multiples in 2024. This case study explores how a SaaS company 
achieved a premium e..." data-rh="true">

2024.

The title says 2026. The queries say 2026. The searcher's intent is 2026. But the meta description — the two lines that appear directly under the title in Google's results — says "reshaping SaaS valuation multiples in 2024."

The blog automation wrote this post with a 2026-targeted title and H1 but generated the meta description from case study content that referenced 2024 data. Nobody caught the mismatch. The description is also truncated — it ends with "e..." cut mid-word — which is a separate readability problem on top of the year contradiction.

A searcher types "saas valuation multiples 2026." They see a title that matches. They read the snippet. It says "2024." They click something else.

That's the entire explanation for 56 impressions and zero clicks at position 2.5.
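A check like this is easy to automate. The sketch below is a hypothetical linter, not part of the site's actual pipeline: it pulls the title and meta description out of fetched HTML, flags any year that appears in one but not the other, and flags descriptions that are over budget or end mid-word. The function name, the 160-character limit, and the regexes are my assumptions, not anything from valuefy.app's codebase.

```python
import re

MAX_DESC_LEN = 160  # rough length at which Google truncates descriptions

def check_meta(html: str) -> list[str]:
    """Flag title/description year mismatches and truncation risk."""
    problems = []
    title = re.search(r"<title>(.*?)</title>", html, re.S)
    desc = re.search(r'<meta name="description" content="(.*?)"', html, re.S)
    if not (title and desc):
        return ["missing title or meta description"]
    # Compare every four-digit year mentioned in each field.
    title_years = set(re.findall(r"\b20\d{2}\b", title.group(1)))
    desc_years = set(re.findall(r"\b20\d{2}\b", desc.group(1)))
    if title_years and desc_years and title_years != desc_years:
        problems.append(
            f"year mismatch: title {sorted(title_years)} vs description {sorted(desc_years)}"
        )
    text = desc.group(1).rstrip()
    if len(text) > MAX_DESC_LEN or text.endswith("..."):
        problems.append("description truncated or over length budget")
    return problems
```

Run against the blog post's HTML, this would have flagged both the 2024/2026 contradiction and the "e..." truncation before the page ever ranked.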

Finding #2: A structural bug lives in every blog post

While I was in the HTML, I checked the H1 count. Curling the SaaS blog post and counting <h1> opening tags:

$ curl ... | grep -oE '<h1[^>]*>' | wc -l
2

Two H1 tags. I checked the business valuation formula post — same result, 2. I checked the IRR calculator tool page — 1. The double H1 lives in the blog template, not in tool pages.

A page with two H1s sends mixed signals about what the primary topic is. It's not a crisis-level bug, but it compounds the other issues. If the meta description is wrong and the page has two H1s, Google has fewer reliable signals for what the page is actually about — and it's already chosen not to send clicks even from positions where it's ranking.

Finding #3: The og:title bug from week one is still there

I reported a duplicate og:title bug in the first post in this series. The static index.html bakes in a generic site-wide og:title, and React Helmet adds a page-specific one on top. Two competing tags, every page.

It's still there. I curled the homepage and the IRR calculator:

# Homepage
$ curl ... | grep -cE '<meta property="og:title"'
2

# IRR calculator
$ curl ... | grep -cE '<meta property="og:title"'
2

This isn't new information. But six weeks is a long time for a 15-minute fix to sit in a backlog.
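Both duplicate-tag findings reduce to the same check: count a tag per page and alert on anything above one. A minimal sketch with the standard library's `html.parser` (the class and function names are mine, and a real audit would feed it HTML fetched per URL):

```python
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Count <h1> tags and og:title metas, the two duplicates found above."""
    def __init__(self):
        super().__init__()
        self.h1 = 0
        self.og_title = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) tuples with lowercased names
        if tag == "h1":
            self.h1 += 1
        if tag == "meta" and ("property", "og:title") in attrs:
            self.og_title += 1

def audit(html: str) -> dict:
    parser = TagCounter()
    parser.feed(html)
    return {"h1": parser.h1, "og:title": parser.og_title}
```

Looping this over every URL in the sitemap and failing CI when either count exceeds 1 would have surfaced the blog-template H1 bug and the stale og:title in the same pass as the grep one-liners above, but on every deploy instead of when a human remembers to check.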

Finding #4: The automation improved pages ranked nowhere, ignored the one with actual signal

This is the pattern that bothers me most.

The daily routine picked six pages to improve this week. All startup-finance tools — IRR, Cap Table, Dilution, Vesting, Funding, Churn Rate. The routine added verified benchmarks and worked examples to each one. That content is now live. None of those pages appear in the 28-day GSC data with any meaningful traction.

Meanwhile, the SaaS valuation blog post was sitting at position 2.5 for 8 impressions, position 3.0 for 5 more impressions, position 4.2 for 9 more — and its meta description had a year error in it that anyone would have caught in 30 seconds.

The automation doesn't read GSC. It has a queue of pages to improve and it works through them in order. Whether a page has 1000 impressions or zero impressions doesn't affect which one gets picked next. The blog post with the broken metadata was never on the list, because the list is driven by a schedule, not by what's actually losing clicks right now.

A human checking GSC once a week would have caught the year mismatch before the post published. The routine — running every night, producing clean commit messages, improving content by every objective measure — never looked at the output.

What I'm going to do about it

  1. Fix the meta description on the SaaS blog post — change "2024" to "2026," fix the truncation. Five minutes. This is the highest-leverage fix available right now because the post is already ranking.
  2. Fix the double H1 in the blog template — this touches every blog post, but it's a one-file change.
  3. Strip the static og:title from index.html — this was overdue in week one. It's more overdue now.
  4. Add a GSC signal check to the automation queue — before choosing the next page to improve, pull its 28-day impression count from GSC. Pages with zero impressions should wait. Pages with impressions but poor CTR should go first.
  5. Audit the remaining blog posts for year mismatches — the automation publishes frequently. If one post has a wrong year in the description, others probably do too.

The uncomfortable lesson

Automation is good at doing the thing it's programmed to do. It's not good at noticing when it's doing the wrong thing.

The calendar routine delivered this week. Six pages improved, six clean commits, six sets of verified benchmarks added. From a process perspective it looks like progress. From a signal perspective, six pages were polished that Google hasn't surfaced to anyone — while a page Google was actively showing to searchers at positions 1–5 had a metadata mismatch that made every one of those impressions worthless.

The fix for the year bug takes five minutes. The fix for the scheduling problem — teaching the routine to check whether a page has signal before picking it — is a morning of work. Neither fix is hard. Both required a human to look at GSC data and connect it to what the automation was doing.

Running a content routine without a signal-feedback loop is the same bet as optimizing a conversion funnel without looking at where users drop off. You can improve every step in isolation and still move nothing.

I'll fix the meta description today. The og:title fix and the blog template H1 are going in this week. The queue-prioritization change — routing automation effort toward pages that already have impressions — is the one I want to report back on in a month. Either the click data improves or it doesn't, and I'll say which.


Running these experiments on valuefy.app and writing up what I find. If you're building programmatic SEO tooling, debugging a content automation that's technically working but not converting — or just hitting the same "impressions up, clicks flat" wall — drop a comment. I want to hear what the same data looks like from other projects.

I also run AImiten, where we build AI tooling for companies. This side project is where the ideas get stress-tested.
