As a React developer transitioning into Go, I was struggling to truly learn the concepts. But worse, I realized I'd been doing the same thing throughout grad school: gaming quizzes instead of studying.
The Problem with Passive Learning
I'd sit down to take a quiz on something I'd never properly studied, and often luck out. My strategy was unconscious but devastating: stop reading the questions and start pattern-matching the options. I'd look for the answer that "felt right" rather than the one that was right. Finding the odd one out was easy. Reading? Not happening.
That's when it hit me: I wasn't learning. I was cheating myself.
A Different Kind of Quiz
So I redesigned the entire quiz format. Not just the content, but the structure.
Instead of a question with one "right" answer that feels correct while others obviously don't, I flipped the script:
Go Mastery has one clearly wrong option surrounded by multiple options that all sound plausible.
Suddenly, you can't just skim for vibes. You have to read. You have to think. You have to understand the why, not just pick the outlier.
Making It Harder (On Purpose)
But even that wasn't enough. I wanted actual friction. The kind that forces genuine learning.
60% of the time, after you select an answer, the quiz pushes back: "Are you sure about your choice?"
You never know whether it's because you're wrong or just unlucky. So you pause, reread, rethink. You don't get the answer handed to you on a silver platter.
And if you do fail? You get three more tries. Plus a mini lesson that actually explains the concept. This transforms failure from "you lost" to "here's what you missed."
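The mechanic is simple to sketch. Here's a minimal, hypothetical version (the function names are mine, not the artifact's): the doubt challenge fires at random, independent of whether the answer is right, and a wrong answer burns one of the three retries before the mini lesson unlocks.

```javascript
// Hypothetical sketch of the doubt-and-retry flow; rng is injectable so it can be tested.
const DOUBT_RATE = 0.6;

// Fires "Are you sure about your choice?" 60% of the time, right answer or not.
function shouldDoubt(rng = Math.random) {
  return rng() < DOUBT_RATE;
}

// Decides what the UI does after an answer is submitted.
function gradeAnswer(selected, correct, retriesLeft, rng = Math.random) {
  if (shouldDoubt(rng)) return { action: "challenge" };   // pause, reread, rethink
  if (selected === correct) return { action: "advance" };
  return retriesLeft > 1
    ? { action: "retry", retriesLeft: retriesLeft - 1 }
    : { action: "lesson" };                               // out of tries: teach the concept
}
```

Because the challenge is uncorrelated with correctness, seeing it tells you nothing about your answer, which is exactly what forces the reread.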
The Deployment Question: When Good Enough Is Good Enough
After a few iterations, I hit a crossroads: "This works well. Do I have to deploy it?"
That one question opened up a cascade of design tradeoffs:
Option 1: Deploy with My API Key
Polished, standalone app. But the math gets scary fast.
According to Anthropic's pricing, a typical question generation on Sonnet runs about 1,200 input tokens + 600 output tokens. That's roughly $0.012 per generated question. Add in the doubt challenge ($0.004), a hint ($0.005), and the lesson ($0.007), and a full 10-question session where someone struggles hits roughly $0.24.
Sounds tiny until you multiply. The daily figures below assume prompt caching brings a session down to roughly $0.18:
- 100 users a day = $18/day = $540/month = $6,570/year
- 500 users a day (one Reddit post) = $90/day = $2,700/month
- 1,000 users a day (viral) = $180/day = $5,400/month
And that's with caching. Without it, you're looking at 80% more.
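The arithmetic is easy to reproduce. This snippet redoes the per-question and per-day math from the token counts above; the $3 / $15 per-million-token rates are the Sonnet prices I assumed at the time, so verify them before trusting the constants:

```javascript
// Assumed Sonnet rates, USD per million tokens. Check current pricing before reusing.
const INPUT_PER_MTOK = 3;
const OUTPUT_PER_MTOK = 15;

function callCost(inputTokens, outputTokens) {
  return (inputTokens * INPUT_PER_MTOK + outputTokens * OUTPUT_PER_MTOK) / 1e6;
}

const question = callCost(1200, 600);      // ~$0.0126 per generated question
const extras = 0.004 + 0.005 + 0.007;      // doubt challenge + hint + lesson
const worstSession = 10 * (question + extras); // if every question needs every extra
const dailyCost = (users, perSession) => users * perSession;
```

The worst case comes out near $0.29; the ~$0.24 figure in the text assumes not every question needs a hint and a lesson. Either way, 100 users a day at $0.18 a session is $18/day, and the curve only goes up from there.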
Plus ongoing maintenance I can't commit to. Grad school doesn't leave weekends free.
Option 2: Keep It as a Claude Artifact
The upside: Users can remix it. Change the topic. Add more questions. The artifact becomes a template, not just a product.
The downside: Every update requires republishing with a new link. The old version stays frozen. No auto-updates.
The bigger question: How long will Claude keep artifacts around? The honest answer: we don't know. But if artifact storage stays free and doesn't eat resources, why not?
I chose the artifact route. The tradeoff felt right: lose on convenience and guaranteed uptime, gain on flexibility and avoiding overhead I didn't want.
Result: Go Mastery lives at a single artifact link that I can update anytime. Users can fork it, adapt it, and make it their own. No billing surprises. No deployment pipelines.
Caching Generated Content Across Users
The original versions generated questions on every page load. Wasteful. Inconsistent.
I implemented a caching layer using shared storage in the Claude artifact. Questions get generated once, cached, and reused across every user who loads the app.
This means:
- First user sees the questions being generated
- Every user after that gets instant questions (the same set)
- All users get a consistent experience
- No API waste, no repeated generation
The storage is shared, so everyone benefits from the initial generation cost. Not perfect, since updates require republishing. But elegant and zero-cost.
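The read-through pattern is the whole trick. Here's a minimal sketch with an in-memory Map standing in for the artifact's shared storage; in the real app these reads and writes go through the async window.storage API, and generateQuestions is the AI call. The key format and helper names are illustrative, not the artifact's actual code.

```javascript
// In-memory stand-in for shared artifact storage (the real app uses window.storage).
const sharedStorage = new Map();

function getOrGenerate(level, topic, generateQuestions) {
  const key = `cache:${level}:${topic}`;
  const cached = sharedStorage.get(key);
  if (cached) return { questions: JSON.parse(cached), fromCache: true };

  // Cache miss: only the first user for this level/topic pays the generation cost.
  const questions = generateQuestions(level, topic);
  sharedStorage.set(key, JSON.stringify(questions));
  return { questions, fromCache: false };
}
```

The first call generates and stores; every later call for the same level and topic is a pure read, which is why only one user ever sees the generation spinner.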
How Claude Artifact Storage Works
Persistent storage for artifacts is available on Pro, Max, Team, and Enterprise plans. A few key constraints to know:
- 20 MB storage limit per artifact - plenty for cached questions
- Text-only data - no images, files, or binary data
- Personal vs. shared - I use shared storage so the first user's generated questions benefit everyone
- Only works when published - storage operations fail during development, which is why testing the caching layer requires publishing first
- Permanent deletion - if you unpublish an artifact, all storage data is deleted
This is why the artifact model works so well for Go Mastery. One person generates the questions. Everyone else gets instant access. No API calls after the first load.
The Implementation
When the second user lands on the app, they won't trigger any new generation. The app checks for gomastery:seed:v2 in shared storage. First user already set it, so seeding gets skipped entirely. Second user goes straight to the setup screen with all cached questions ready.
Every time any user generates a question (either through seeding or by choosing "Generate" source), this runs:
```javascript
await window.storage.set(
  CACHE_KEY_PREFIX + level + ":" + topic,
  JSON.stringify(questions),
  true  // shared: true, writes to shared storage visible to all users
)
```
That true flag writes to shared storage, visible to all users. User A generates a question on "Goroutines" and it gets saved. User B opens the app minutes later and sees that question available under "Cached" immediately.
The virtuous cycle builds over time:
- Seed (1 question per topic) goes into shared cache
- Users pick "Generate" and add more questions
- Next user lands and finds more cached questions available
- Cache grows organically. Fewer AI calls needed over time.
The cache is capped at five questions per topic (MAX_CACHED = 5). Once a topic hits that limit in shared storage, new generated questions push out the oldest ones. The pool stays fresh but bounded.
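The eviction rule is a plain FIFO cap. A sketch (the function name is mine):

```javascript
const MAX_CACHED = 5;

// Append a new question and drop the oldest entries beyond the cap (FIFO eviction).
function addToTopicCache(existing, question, max = MAX_CACHED) {
  const next = [...existing, question];
  return next.length > max ? next.slice(next.length - max) : next;
}
```

FIFO rather than LRU keeps the logic trivial: there's no access tracking to persist in storage, just an ordered array per topic.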
Why This Matters
The Go Mastery story is really about three design principles:
Structure learning around struggle. The Socratic method isn't new. But making it mandatory in a quiz is. You can't shortcut understanding if the options all sound plausible and you only get one wrong answer to find.
Understand your constraints. Deploying felt like the right move until I did the math. Once I looked at the actual costs from Anthropic's pricing, artifacts weren't second-class citizens. They were the better tradeoff for this stage.
Optimize for impact, not polish. A freely remixable artifact reaches more learners than a locked-down app that costs me time and money to maintain.
Note on pricing: The cost estimates above use Anthropic's Sonnet 4.5 pricing as of early 2026. If you're building something similar, always verify current rates at anthropic.com/pricing before deciding whether to deploy. Model pricing changes, and that changes the entire calculation.
The best app is sometimes the one you don't build.
What's Next?
If this resonated with you, I'm working on deeper dives into the decisions behind Go Mastery. Drop a comment below if there's something specific you want to explore:
- How I structured the question generation and caching system in the artifact
- The Go concepts that tripped me up most (and how the quiz is designed around them)
- Why the Socratic method actually works for learning (with neuroscience backing)
- Building Claude artifacts that scale without infrastructure
- The full cost comparison: artifact vs. deployed app vs. BYOK model
Or follow me if you're interested in Go learning resources, full-stack development, or shipping projects without overthinking them.