DEV Community

Olivia Craft

I Published 44 Cursor Rules Articles in 22 Days — Here Is What I Learned

I'm going to skip the hook. You read the title.

44 articles. 22 days. That averages out to two a day, every day, for three weeks. Most of them about cursor rules — .cursorrules files, .cursor/rules/*.mdc, the little system prompts that turn Cursor and Claude Code from "intern who just discovered the codebase" into something close to a useful collaborator.

This post is the retrospective. It is not a brag. The numbers I'm about to share are small enough that bragging would be embarrassing.

Here are the honest numbers after 22 days:

  • 44 articles published on dev.to
  • 1 reaction total (one thumbs up — thank you, whoever you are)
  • 0 comments
  • 7 followers on Twitter (started from zero)
  • 200+ tweets posted
  • 0 sales on the Gumroad product
  • $0 revenue

If you came for a "how I made $10k in a month" post, close the tab. If you came for what actually happens when a new account ships hard for three weeks, stay.

Why I started

Every AI assistant I've used — Cursor, Claude Code, Copilot, the whole stack — generates code that runs but violates every convention that matters.

A few examples I kept seeing on real projects:

  • Python functions returning bare dict with no type hints, no TypedDict, no dataclass. Just a vibes-shaped object.
  • React components with anonymous arrow functions inline in every onClick, re-rendering the world on every parent update.
  • Go code that ignored context.Context propagation and swallowed errors with _ = err.
  • Java services with try { } catch (Exception e) { e.printStackTrace(); } everywhere — the cargo-cult version of error handling.
  • TypeScript any everywhere because the AI couldn't be bothered to figure out the actual type.

None of this is the AI being broken. It's the AI doing exactly what it was asked to do: produce code that matches the surface pattern of the prompt.

The fix is not better prompting. You will not type your way out of this. The fix is rules — persistent, project-scoped instructions that the model reads on every turn. .cursorrules for Cursor. CLAUDE.md for Claude Code. The same idea: stop telling the AI what you want this time. Start telling it what you always want.
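As a rough illustration — the wording here is mine, not a canonical format; these files are free-form instructions the model reads on every turn — a project-scoped rules file might look like:

```
# .cursorrules (hypothetical example)
- Never return bare dicts from Python functions. Use a dataclass or TypedDict.
- Type every function signature. `Any` requires a comment justifying it.
- In React, never define handlers inline in JSX. Use useCallback or a named function.
- In Go, propagate context.Context through every call chain. Never discard errors with `_ = err`.
```

Every rule encodes something you would otherwise repeat in prompt after prompt.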

That was the thesis. That's what I wanted to write about.

The content strategy

I picked a stupid-simple format and stuck to it. One language or framework per article. 6–8 rules. Each rule with:

  1. The rule itself, written as a single block you can paste straight into .cursorrules.
  2. A "bad" code sample — what the AI generates without the rule.
  3. A "good" code sample — what the AI generates with the rule.
  4. One or two sentences explaining the failure mode the rule prevents.

That's it. No "let's first define what cursor rules are" intro paragraphs. No "in conclusion, AI is here to stay." The reader either knows what Cursor is or they don't, and if they don't, my article is not the one for them.

I covered: Python, TypeScript, Go, Rust, Java, Kotlin, C#, PHP, Ruby, Lua, Bash, Docker, React, Vue, Angular, Next.js, Svelte, Solid, Astro, FastAPI, Django, Flask, Spring Boot, Laravel, Rails, Express, NestJS, and a long tail I'm forgetting.

A few were "complete guide" longer-form pieces (8–10k words). Most were focused 2–4k word pieces on a specific stack.

What I learned writing this much, this fast:

The bad/good code pattern is the only thing that matters. Readers do not want abstract rules. They want to see what their AI is doing wrong next to what it should do. If your example isn't visually different in 2 seconds of scanning, the rule is too vague.

Real failure modes beat clever prose. The strongest articles were grounded in things I'd actually seen break on real codebases. The weakest were the ones reaching for filler rules to hit the 6-rule format.

Length is a liability past 2,500 words. The "complete guide" 8k-word pieces took 4–5x as long to write as the focused ones. I have no evidence anyone reads past the third heading.

What I learned about dev.to

44 articles. 1 reaction. 0 comments. Let's talk about what that actually means.

It does not mean the articles are bad. It also does not mean they are good. It means dev.to does not give organic reach to brand-new accounts publishing into competitive tags.

The dev.to home feed is mostly driven by the #discuss and #beginners tags, plus whatever the moderators boost. Technical pieces in #cursor, #ai, #productivity get rotated through the tag feed for a few hours and then disappear into the backlog. There is no algorithmic resurfacing. There is no "you might also like." There is the firehose, and then there is the archive, and the gap between them is roughly 12 hours.

If you don't have an existing audience that follows you to dev.to, your first 50–100 articles will feel like shouting into a server room. The platform is friendly, the editor is good, the markdown is clean — but the network effect is not there for new accounts.

This is fine. I knew this going in. The bet was never "dev.to traffic." The bet was: indexed, public, technical content that ranks on Google for long-tail searches.

More on that below.

What worked on Twitter (sort of)

7 followers. 200+ tweets. No viral moment. No retweets from anyone with a real account.

Things I learned:

  • Posting code screenshots beats posting links every time. A side-by-side bad/good code image gets 3–5x the impressions of a tweet with a dev.to link. The platform punishes outbound links — this is well documented and I confirmed it the hard way.
  • Threads are not a magic bullet for new accounts. If your first tweet doesn't get engagement, the rest of the thread doesn't get shown. I wrote ~20 threads. None of them traveled.
  • Replying to bigger accounts beats posting your own content. Actual contributions to actual conversations. I got a handful of profile clicks. None converted.
  • Polls work as engagement primers but don't drive follows. A "which language next?" poll got 30+ votes and zero followers. Useful as a research signal, useless as a growth tactic.

The honest truth: 22 days is not long enough to draw conclusions about Twitter growth. The accounts I benchmarked against took 12–18 months of daily posting before hitting escape velocity. I am 5% into that timeline.

The SEO bet

This is the actual thesis, and I am going to be unapologetic about it.

Most of these articles target long-tail keywords like:

  • cursor rules for python
  • cursor rules for typescript
  • cursor rules for go
  • cursor rules for nextjs

These are queries that did not exist in meaningful volume 18 months ago. They exist now. They will exist in much larger volume in 6 months as Cursor adoption keeps compounding. The total addressable search volume for "cursor rules for $LANGUAGE" is going to grow by an order of magnitude in the next year, and there is currently almost nobody writing technical depth content for those queries.

The pieces I'm betting on:

  • They have the keyword in the title, the first paragraph, and the URL.
  • They include real code, not generic listicle content. Google's helpful content updates favor this.
  • They are on a high-authority domain (dev.to has DA 90+) which dramatically reduces the time-to-rank.
  • They are technical enough that the bounce rate from accidental visitors will be low.

My guess: 6–8 weeks before the first articles show up in the top 20 for target queries. 12–16 weeks before any break the top 5. I will know by mid-summer. If the thesis is wrong, I will publish the post-mortem.

What I would do differently

If I were starting over on day zero, knowing what I know now:

Distribution before content. I had this backwards. I spent the first 14 days writing and the last 8 days realizing nobody was reading. The right move would have been: spend the first 5 days building a small audience (Twitter replies, Hacker News comments, Reddit answers in r/cursor and r/ChatGPTCoding), and only then start publishing. Audience first, content second.

One newsletter, not 44 dev.to posts. A free Substack with 200 subscribers would have produced more sales than 44 dev.to articles with 1 reaction. The economics of email vs. blog traffic for a paid product are not even close. I will be starting one this week.

Pick three platforms, not seven. I posted to dev.to, Twitter, Mastodon, Bluesky, LinkedIn, Hashnode, and Medium. Spread too thin. Two got real attention; the rest got copy-paste reposts that performed exactly as you'd expect.

Build the product first, write the content as documentation. I built the Gumroad pack in parallel with the articles, so the articles couldn't link to it as the canonical reference. If I'd shipped on day 1, every article could have ended with the same CTA. Instead I retrofitted it into half of them.

The product

The Cursor Rules Pack is the thing I am actually selling.

It exists because I got tired of writing the same .cursorrules files from scratch on every project. The pack has:

  • 50+ rules across 12+ languages and frameworks
  • Each rule formatted as a copy-paste block
  • Bad/good code samples for every rule
  • Stack-specific bundles (Python web, React frontend, Go backend, etc.) you can drop into a project as a single file
  • Updates whenever I add a new language

Price is intentionally low. The point is for it to be a no-brainer for any developer who has spent 30+ minutes fighting Cursor output.

You can find it here: https://oliviacraftlat.gumroad.com/l/wyaeil

22 days of work. 0 sales so far. I am telling you this because the honest truth matters more than the social proof. The product is good. The distribution is the problem. I am working on the distribution.

What's next

I am going to keep publishing, but the cadence is changing. Two articles a day was unsustainable in terms of quality and unsustainable in terms of my own attention. The new plan:

  • Three high-quality articles per week (Mon/Wed/Fri).
  • One newsletter issue per week with the rules I added that week and one short essay.
  • Daily Twitter presence focused on replies and screenshots, not links.
  • Hacker News submissions for the "complete guide" pieces only, when they're genuinely the best resource on the topic.

If the SEO thesis pans out by week 8, I will keep doing this. If it doesn't, I will write the post-mortem and try something else. Either way I'll publish the data.

If you got this far

You probably either (a) write a lot of code with AI assistance and want to make it less bad, or (b) are a writer/builder thinking about doing something similar and wanted to know what the actual numbers look like.

For (a): the pack is here — https://oliviacraftlat.gumroad.com/l/wyaeil. It will save you the same evening I spent assembling rules from forum threads.

For (b): publish anyway. The articles are evergreen. The 22 days of work is not lost — it is sitting on a high-authority domain waiting for the search traffic to find it. The vanity metrics are zero now. The vanity metrics are not the point.

Either way: thanks for reading. If you have feedback, opinions, or want to roast me, my Twitter is @OliviaCraftLat and dev.to comments are wide open.

I will write the 60-day update when we get there.
