DEV Community

jg-noncelogic

Posted on • Originally published at streetseekr.com

Show HN: StreetSeekr, a Street View scavenger hunt

What StreetSeekr Gets Right: A tidy Street View scavenger hunt you can ship fast

Small scope, immediate feedback, and a product that teaches you which edges actually matter

StreetSeekr is a simple daily game: spot a target anywhere on Earth using Street View. It looks like another location game at first glance, but the product choices underneath are instructive for builders who want to ship something playable fast.

Thesis: micro-products win when constraints are deliberate. StreetSeekr constrains interactions, privacy exposure, and repeatability in ways that produce a fun, durable daily loop. If you build tools or experiences for attention-heavy audiences (agencies, content creators, indie products), study the trade-offs here—then steal the parts that map to your constraints.

What StreetSeekr actually is (and why that matters)

StreetSeekr runs a daily scavenger hunt on top of Google Street View: a target appears somewhere on Earth and players navigate in Street View to find it. It's not GeoGuessr-style coordinate guessing; it's a directed search with a single goal. That narrow scope achieves a few useful things:

  • Lower cognitive load. One target, one goal. Players aren't paralyzed by infinite possibilities.
  • Easier instrumentation. Track attempts, time-to-solve, and common failure modes without building complex telemetry across freeform guesses.
  • Smaller moderation surface. You host fewer user-generated markers or uploads, so there is less to audit.

These trade-offs are product-level constraints, not technical limitations. Building less lets you iterate on quality—how the target is selected, the hint cadence, the scoring curve—rather than endless feature breadth. For anyone shipping a daily experience, that's the primary lesson: pick a single interaction you can tune and measure.
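The instrumentation point above is concrete: with a single interaction, one event shape covers attempts, solves, and time-to-solve. A minimal sketch of what that might look like (the field names are my own, not StreetSeekr's):

```javascript
// One event shape for a single-target game: every attempt is the same record.
const events = [];

function recordAttempt({ puzzleId, playerId, solved, msElapsed }) {
  events.push({ puzzleId, playerId, solved, msElapsed, at: Date.now() });
}

// Aggregate the signals worth watching: attempt volume and median time-to-solve.
function puzzleStats(puzzleId) {
  const rows = events.filter(e => e.puzzleId === puzzleId);
  const solveTimes = rows
    .filter(e => e.solved)
    .map(e => e.msElapsed)
    .sort((a, b) => a - b);
  return {
    attempts: rows.length,
    solves: solveTimes.length,
    medianSolveMs: solveTimes.length
      ? solveTimes[Math.floor(solveTimes.length / 2)]
      : null,
  };
}
```

Because there is only one interaction, this tiny append-only log is enough to answer most tuning questions without a telemetry pipeline.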

How I would build it in a weekend (practical stack and cost trade-offs)

If you want a playable prototype quickly, pick components that avoid long integrations.

  • Frontend: static site with the Google Maps JavaScript API or Street View embedding. No heavy SPA frameworks needed—progressive enhancement wins.
  • Backend: serverless function to publish the daily target, validate solves, and increment leaderboards. Use PostgreSQL or SQLite for a tiny dataset.
  • Caching: pre-rendered thumbnails and cached Street View image IDs. Reuse images between close targets to cut API calls.
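For the cached thumbnails, the Street View Static API accepts either a lat/lng `location` or a panorama ID (`pano`); building the URL once and reusing the panorama ID is most of the caching story. A sketch (reusing `pano` across nearby targets is my own assumption about how you'd share images, not a documented StreetSeekr feature):

```javascript
// Build a Street View Static API thumbnail URL.
// Prefer a cached panorama ID over lat/lng so nearby targets resolve
// to the same image — and the same billable request.
function thumbnailUrl({ panoId, lat, lng, apiKey, size = '600x400' }) {
  const base = 'https://maps.googleapis.com/maps/api/streetview';
  const params = new URLSearchParams({ size, key: apiKey });
  if (panoId) params.set('pano', panoId);
  else params.set('location', `${lat},${lng}`);
  return `${base}?${params.toString()}`;
}
```

Check the current Maps Platform terms before persisting images; caching image IDs and URLs is generally safer than storing the pixels.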

Trade-offs to plan for:

  1. API cost vs. UX. Street View requests have a price and licensing terms. Make puzzles share thumbnails and URLs to limit repeat requests.
  2. Solving logic complexity. Use a distance threshold from the target coordinates for automatic validation; add a human approval flow for edge cases.
  3. Latency. Serverless cold starts are fine for low traffic; if retention grows, add a tiny cache layer.
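Publishing the daily target doesn't even require a cron job or stored state: pick deterministically from a curated pool by hashing the date, so every serverless instance agrees on today's puzzle. A sketch (the pool and hash choice are mine, not StreetSeekr's):

```javascript
// Deterministic daily pick: the same UTC date always yields the same
// target, so stateless serverless functions stay in sync for free.
function dailyTarget(pool, date = new Date()) {
  const day = date.toISOString().slice(0, 10); // e.g. "2024-05-01"
  let hash = 0;
  for (const ch of day) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return pool[hash % pool.length];
}
```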

A small distance check (Haversine validation):

```javascript
// Haversine: meters between two {lat, lng} points
function metersBetween(a, b) {
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(b.lat - a.lat), dLng = toRad(b.lng - a.lng);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * 6371000 * Math.asin(Math.sqrt(h)); // Earth radius ≈ 6,371 km
}

if (metersBetween(guess, target) < 50) markSolved(); // within 50 m counts
```

That gets you a working product fast and keeps operational costs predictable.

Moderation, privacy, and legal trade-offs you can’t ignore

Street View shows real places, sometimes people, and occasionally sensitive locations. That creates three practical responsibilities:

  • Respect the provider terms. Google’s Maps and Street View APIs have display and caching rules. Caching thumbnails is useful, but follow licensing limits.
  • Handle sensitive content. Even if you don't accept uploads, automated selection can pick locations with faces or private property. Add a review queue for puzzles flagged by heuristics (face density, private-tagged POIs) and make review mandatory before publishing.
  • Be transparent about data. Tell players what you record: time-to-solve, approximate location guesses, IPs if needed for moderation.

Operational pattern that scales: automation first, human approval second. Run lightweight classifiers to filter obvious issues. Hold borderline puzzles for a one-click review step. This mirrors what works in compliance-heavy products: reduce human load without removing human judgment.
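That automation-first, human-second pattern is ultimately a few lines of routing logic. A sketch, assuming your selector already annotates candidates with heuristic scores (`faceDensity` and `privateTagged` are hypothetical field names, and the thresholds are placeholders to tune):

```javascript
// Route each candidate puzzle: auto-reject obvious problems, hold
// borderline cases for one-click human review, publish the rest.
function route(candidate, { maxFaceDensity = 0.05, reviewFaceDensity = 0.01 } = {}) {
  if (candidate.privateTagged || candidate.faceDensity > maxFaceDensity) return 'reject';
  if (candidate.faceDensity > reviewFaceDensity) return 'review';
  return 'publish';
}
```

The value is the explicit 'review' state: automation shrinks the queue, but nothing borderline goes live without a human click.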

Distribution, retention, and the metrics that matter

A playable daily product is only as good as the loop that brings players back and invites others in. StreetSeekr's simple mechanics suggest a focused growth playbook:

  • Retention metric: day-1 and day-7 return rate after a puzzle. A tight, repeatable challenge should get measurable returns; aim for >20% D1 initially.
  • Viral mechanics: shareable solve screenshots, canonical links to a solved puzzle, and short embed snippets for social feeds. Make sharing frictionless—one click from the solution screen.
  • Monetization without ruin: if you plan to monetize, keep the core game free. Consider cosmetic leaderboards or private daily leagues for paying teams (agencies could run branded puzzles for clients).
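One-click sharing mostly means pre-building the canonical URL and a spoiler-free share line on the solution screen. A sketch (the URL scheme is illustrative, not StreetSeekr's actual routes):

```javascript
// Build the canonical puzzle link and share text for the solution screen.
// Deliberately spoiler-free: no location name, just the player's result.
function shareCard({ puzzleNumber, attempts, seconds }) {
  return {
    url: `https://streetseekr.com/puzzle/${puzzleNumber}`,
    text: `StreetSeekr #${puzzleNumber}: found it in ${attempts} ` +
      `${attempts === 1 ? 'try' : 'tries'} (${seconds}s)`,
  };
}
```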

Instrument early: track time-to-solve distribution, median attempts per user, and which puzzles spike drop-offs. Those signals tell you where to tune difficulty, add hints, or rework the selector algorithm.
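At this scale, D1 retention is a small aggregation over raw play records rather than an analytics-suite feature. A sketch (the record shape is my own assumption):

```javascript
// Day-1 retention: of the players who played on `day`, what fraction
// played again the next day? `plays` is [{ playerId, day }] with `day`
// as a "YYYY-MM-DD" UTC date string.
function d1Retention(plays, day) {
  const next = new Date(Date.parse(day + 'T00:00:00Z') + 86400000)
    .toISOString().slice(0, 10);
  const cohort = new Set(plays.filter(p => p.day === day).map(p => p.playerId));
  const returned = new Set(
    plays.filter(p => p.day === next && cohort.has(p.playerId)).map(p => p.playerId)
  );
  return cohort.size ? returned.size / cohort.size : 0;
}
```

The same cohort-and-window shape gives you D7 by swapping the one-day offset for seven.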

StreetSeekr is a reminder that interesting products don't need to be huge systems. Pick a single, repeatable interaction, make it cheap to operate, and design the human checks up front. If you build daily moments—whether for a game, a content product, or a client-facing tool—this is how you prioritize work: tune the interaction, automate obvious safety checks, and measure the loop that brings people back.

If you want a concrete next step: implement a distance-based validator, add a one-click moderation hold, and push a share card. You'll learn enough from real players to justify anything more complicated.

