From 0 to 30 indexed pages in 3 weeks — what actually moved the needle
I'm building Convertify — a free image converter that supports 20+ formats with no signup, no limits, and no file tracking. Solo project, built with Rust + libvips on the backend and Next.js on the frontend.
Three weeks ago Google had indexed exactly 0 pages. Here's what I changed and what actually worked.
The starting point: a Vite SPA that Google couldn't read
The original version was a classic React SPA bundled with Vite. Fast to build, great DX — and completely invisible to Google. Googlebot would hit the page, get an empty HTML shell, and move on. No content to index.
The fix was straightforward in theory: migrate to Next.js App Router with SSG. In practice it took a few days, but the result was 186 fully rendered static pages generated at build time — one for every format combination the tool supports.
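The core of that migration is Next.js's `generateStaticParams`. Here's a rough sketch of the pattern (route name and format list are hypothetical, not Convertify's actual code):

```typescript
// A sketch of the App Router SSG pattern. In app/[conversion]/page.tsx,
// Next.js calls this at build time; each returned param object becomes
// one fully prerendered static HTML page that Googlebot can read.
const FORMATS = ["heic", "jpg", "png", "webp", "avif"]; // small subset for illustration

export function generateStaticParams(): { conversion: string }[] {
  const params: { conversion: string }[] = [];
  for (const from of FORMATS) {
    for (const to of FORMATS) {
      if (from !== to) params.push({ conversion: `${from}-to-${to}` });
    }
  }
  return params; // 5 formats -> 20 pages; 20+ formats -> hundreds
}
```

The key difference from the SPA: the HTML shell Googlebot receives already contains the page content, no JavaScript execution required.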
That alone got me from 0 to around 16 indexed pages in the first two weeks. But then growth stalled.
What actually moved the needle in week 3
1. Schema markup on every page
I added FAQPage + HowTo JSON-LD schema to every converter page. Not just one or two — all 186.
The FAQPage schema answers the questions people actually ask: "What is HEIC?", "Will my photos lose quality?", "How long does conversion take?". The HowTo schema describes the conversion steps in a structured format Google can parse.
Result: impressions jumped, and GSC started showing the pages in rich result eligibility.
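For anyone who hasn't done this before, here's a minimal FAQPage sketch (field names follow schema.org; the exact wiring into Next.js is my simplification, not the production code):

```typescript
// Build a schema.org FAQPage object for a converter page. In Next.js this
// gets serialized with JSON.stringify() into a
// <script type="application/ld+json"> tag in the page markup.
type Faq = { question: string; answer: string };

export function faqPageSchema(faqs: Faq[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}

const schema = faqPageSchema([
  { question: "What is HEIC?", answer: "Apple's default photo format since iOS 11." },
  { question: "Will my photos lose quality?", answer: "JPG output is lossy; PNG output is lossless." },
]);
```

Because it's just a function, the same 6–8 question set per page scales to all 186 pages mechanically.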
2. Content expansion from ~500 to 5000–7000 characters
This was the most time-consuming part. Each landing page went from a thin paragraph to a full article: format comparison tables, browser support matrices, compression benchmarks, use case explanations, and a 6–8 question FAQ.
Thin content is invisible. Google doesn't rank pages that don't say anything useful. The pages that got indexed first were the ones with the most content.
3. Internal linking cluster (RelatedConversions component)
I built a RelatedConversions component that appears on every converter page and links to 8–10 related conversions. For example, /heic-to-jpg links to /heic-to-png, /jpg-to-webp, /png-to-jpg, and so on.
This does two things: it passes PageRank between pages in the same cluster, and it gives Googlebot a clear crawl path through the entire site. Before this, pages were islands. After this, they're a connected graph.
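The selection logic is simple; a sketch of how it can work (this is the idea, not the actual component code): pick other conversions that share the current page's input or output format.

```typescript
// Given one conversion slug, pick related pages that share the input or
// output format, so every link stays inside the same topic cluster.
export function relatedConversions(
  current: string,   // e.g. "heic-to-jpg"
  all: string[],     // every conversion slug on the site
  limit = 10
): string[] {
  const [from, to] = current.split("-to-");
  return all
    .filter((slug) => slug !== current)
    .filter((slug) => {
      const [f, t] = slug.split("-to-");
      // same input, same output, or the reverse direction all count as related
      return f === from || t === to || f === to || t === from;
    })
    .slice(0, limit);
}
```

For `/heic-to-jpg` this surfaces `/heic-to-png` (same input), `/png-to-jpg` (same output), and `/jpg-to-webp` (chained format), while unrelated pairs like `/gif-to-webp` are excluded.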
4. About page with E-E-A-T signals
Google's helpful content guidelines care about who is writing the content. I added an About page with Person schema (name, job title, links to dev.to articles) and WebSite schema with a sitelinks search box.
It's not a magic ranking factor, but it signals to Google that there's a real person behind the site — not a content farm.
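The sitelinks search box part is just a SearchAction entry on the WebSite schema. A minimal sketch (the search URL pattern here is an assumption for illustration):

```typescript
// schema.org WebSite markup with a SearchAction, which is what makes a
// site eligible for the sitelinks search box in Google results.
export const websiteSchema = {
  "@context": "https://schema.org",
  "@type": "WebSite",
  name: "Convertify",
  url: "https://convertifyapp.net",
  potentialAction: {
    "@type": "SearchAction",
    // {search_term_string} is the placeholder Google substitutes the query into.
    // The /search path is illustrative; use whatever your site actually exposes.
    target: "https://convertifyapp.net/search?q={search_term_string}",
    "query-input": "required name=search_term_string",
  },
};
```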
Results after 3 weeks
| Metric | Week 0 | Week 3 |
|---|---|---|
| Indexed pages | 0 | 30 |
| GSC impressions (7d) | 0 | 160 |
| Unique search queries | 0 | 45 |
| PageSpeed (mobile) | — | 100/100 |
30 indexed out of 186 total means Google is still crawling. The indexing curve is moving in the right direction — pages that got schema + expanded content are getting indexed first.
Positions are averaging around 55, which means page 5–6. Clicks come when you hit the top 10. That's the next phase.
What didn't work (or not yet)
Backlinks — I have ~47 linking domains in Ahrefs, but most are toxic spam (pharma sites, fake testimonial networks). Real quality backlinks: about 3–5. This is the biggest gap right now.
Reddit — posted in r/webdev and r/juststart, both got auto-removed. New accounts trigger spam filters regardless of content quality. Building karma through comments first before trying posts again.
Impressions without clicks — 160 impressions at position 55 means the pages are being seen but aren't competitive yet. More content depth and backlinks should push positions into the top 20, where CTR starts to matter.
What's next
- Push /heic-to-jpg and /avif-to-jpg to 1000+ words with full schema (these are the highest-volume queries)
- Get HN karma to 10+ and do a Show HN post
- Outreach to "best free image converters 2026" listicle authors for genuine backlinks
- Track whether internal linking cluster improves crawl coverage in GSC
Stack
- Backend: Rust + Axum + libvips (image processing)
- Frontend: Next.js 16 App Router, SSG
- Database: PostgreSQL (landing page content)
- Infrastructure: VPS + Caddy + PM2
The site: convertifyapp.net — no signup, no limits, 20+ formats.
Top comments (7)
This mirrors my experience almost exactly, but at a much larger scale. I run an Astro-based financial data site with 89K+ pages across 12 languages, and the content expansion + internal linking combination is what moved the needle for us too.
A few data points from our journey that might be useful:
We went from position ~52 (3mo avg) to 12.1 (7d avg) over the last month. The pages that rank best are the ones with 600-800 word unique analysis, not the thin listing pages.
Internal linking was massive. We added 'Related Stocks' and 'Popular in Sector' widgets that cross-link stock pages to sector pages to ETF pages. Same principle as your RelatedConversions — turning islands into a connected graph.
On the schema markup front: we use Corporation, FinancialProduct, and BreadcrumbList JSON-LD on every page type. GSC started showing rich result eligibility within days.
The positions-at-55 problem you mention is real. Our 3-month average is still ~50, but the 7-day window shows 12.1 — which tells me that recently-improved pages rank dramatically better. Focus your content depth on the highest-impression pages first and you'll see that 7-day average drop fast.
One thing that helped us with the backlink problem: writing about the building process itself (like you're doing with this post). Technical SEO case studies get natural links from other builders.
Thanks for the answer, this is really helpful; I'm a beginner in the SEO world. The 7-day vs 3-month average framing is really useful — I've been looking at the overall average and getting discouraged, but the recently-improved pages metric makes more sense as a leading indicator. Will start tracking that split in GSC.
The 600-800 word unique analysis point is interesting — I assumed longer was always better, but you're saying it's the quality of the unique insight, not raw length. That changes how I'll approach the next batch of pages.
One question: at your scale (89K pages, 12 languages), how do you decide which pages to prioritize for content depth? Do you go by impression volume in GSC, or is there another signal you use to identify which thin pages have the most upside?
Great question! For prioritization I use a layered approach:
GSC impressions first — if Google is already showing a page in search results but not getting clicks, that's the highest-ROI page to improve. Even going from position 15 to position 8 can 10x your click-through rate.
High-volume tickers/topics — for a financial site, blue chips (AAPL, TSLA, MSFT) get searched way more than micro-caps, so those get content depth priority.
Pages that already rank in one search engine but not others — I found some pages ranking well on Bing but invisible to Google. Those are worth investing in because the content clearly has value, it just needs quality signals to cross the threshold.
The 7-day window in GSC is your best friend for this. The 3-month average hides momentum — you might have pages that went from position 50 to position 20 in the last week, and those are exactly the ones to double down on.
Don't try to thicken everything at once. Pick your top 50 pages by impressions and make those genuinely useful first. The rest can wait.
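That layered approach reduces to a filter-and-sort over a GSC export; roughly, as a sketch (field names assumed from a typical GSC CSV, thresholds are my own rules of thumb):

```typescript
// Rank pages for content-depth work: high impressions plus a position just
// outside the top 10 ("striking distance") is the highest-ROI bucket.
type GscRow = { page: string; impressions: number; position: number };

export function prioritize(rows: GscRow[], limit = 50): string[] {
  return rows
    .filter((r) => r.position > 10 && r.position <= 40) // visible but not yet getting clicks
    .sort((a, b) => b.impressions - a.impressions)      // most-seen pages first
    .slice(0, limit)
    .map((r) => r.page);
}
```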
Thanks for the answer!
The RelatedConversions component is the highest-leverage thing you describe, and I'd put it ahead of the schema markup in long-term impact. What it's doing is turning your 186 pages from isolated nodes into a coherent topic graph — which is exactly how crawl efficiency and PageRank flow through a site at scale.
A few things that might sharpen that component:
Be intentional about link directionality. For /heic-to-jpg, you want to link to /heic-to-png and /jpg-to-webp because they share a concept (input format, or output format), not just because they're alphabetically adjacent. A simple scoring approach: weight tag/format overlap at 70% and title keyword similarity at 30%. This ensures the 8-10 links you surface per page are semantically meaningful rather than random neighbors.
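That 70/30 weighting could be sketched like this (Jaccard similarity over word sets is one reasonable choice; helper names are hypothetical):

```typescript
// Score candidate pages by 70% tag/format overlap + 30% title keyword
// similarity, then surface the top-scoring N as related links.
function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((x) => b.has(x)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

export function relatedScore(
  tagsA: string[], tagsB: string[],
  titleA: string, titleB: string
): number {
  const tagSim = jaccard(new Set(tagsA), new Set(tagsB));
  const titleSim = jaccard(
    new Set(titleA.toLowerCase().split(/\s+/)),
    new Set(titleB.toLowerCase().split(/\s+/))
  );
  return 0.7 * tagSim + 0.3 * titleSim;
}
```

An identical pair scores 1.0 and a fully disjoint pair scores 0, so you can rank all candidates and take the top 8-10.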
The cluster structure you're building is also the right pattern for Shopify merchants who run blogs alongside a product catalog. The default Shopify blog has no related posts logic — it just shows recent posts, which pulls in completely unrelated content and destroys the topical cluster effect you're describing. That's a problem we specifically built Better Related Blog Posts (apps.shopify.com/better-related-blog-posts) to solve — it scores posts by tag overlap and keyword similarity so related links reflect semantic proximity, not post date.
On the positions-at-55 problem: the cluster structure you've built will help, but content depth on the highest-volume pages (/heic-to-jpg in particular) will do more. Google's own guidance on helpful content explicitly rewards pages that cover a topic more completely than the alternatives, not just longer. The format comparison tables and browser support matrices you added are the right move — those are hard for thin-content competitors to replicate quickly.
The directionality point on RelatedConversions is exactly what I was missing. Right now I'm linking based on format overlap (same input or same output), but the 70/30 scoring between format overlap and keyword similarity is a cleaner framework — I'll implement that this week.
On positions-at-55: agreed, content depth on /heic-to-jpg is the next priority. The comparison tables are there but the page still needs more "complete coverage" signals. Curious — did you find that adding browser support matrices moved the needle for you, or was it more the FAQ depth?
Thanks for the answer. If you could share a few more tips for beginners, I'd be grateful.