DEV Community

Kopaev

I Live in a Van and Vibe-Coded a Global News Aggregator With Claude for $150/mo

I live in a Fiat Ducato campervan with my wife and a cat named Zhuzhu. Three years ago I started a WordPress calendar of van life events — festivals, meetups, expos. Everything manual.

Every morning the same routine: open dozens of tabs looking for events, end up finding news instead. A city banned overnight parking. A new campsite opened in Portugal. Spain tightened its camping rules. Read it, close the tab — gone.

At some point I realized: the news was the product. The event calendar was just a side feature. That's how OpenVan.camp was born.

The entire project was vibe-coded with Claude and ChatGPT. I'm not a developer.

What It Does

  • Parses 200+ sources from 40+ countries
  • Clusters duplicates into stories using AI embeddings
  • Translates everything into 7 languages automatically
  • Tracks 650+ events worldwide
  • Auto-publishes to Telegram, WhatsApp, VK, Facebook, Threads
  • Built-in tools: visa calculator, fuel prices, currency converter

2.5 months in. 5,800 stories. Full autopilot.

OpenVan.camp news feed showing van life stories in multiple languages

Why I Built All of This

Here's the thing: every tool on the site was built for myself first. The 90/180-day Schengen calculator — because I was counting days on paper and getting it wrong. Fuel prices across Europe — because I googled before every gas station. Currency converter — because Turkey has one price, Serbia another, and my head can't keep up.

When you build for yourself, you obsess over every detail. Not because "users will appreciate it" — because you use it every day and a crooked button drives you nuts. Turns out other people needed this stuff too.
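The 90/180 Schengen math is fiddly enough to be worth spelling out: on any given day, your total days inside the area during the preceding 180 days must not exceed 90. A minimal sketch of that rolling-window logic in Python (illustrative only — the site itself runs on PHP/Laravel, and the function names here are mine):

```python
from datetime import date, timedelta

def days_remaining(stays, on=None):
    """Schengen 90/180 rule: on any given day, days spent in the
    Schengen area within the preceding 180-day window must not
    exceed 90. Returns how many days are still available.

    stays: list of (entry_date, exit_date) tuples, inclusive.
    """
    on = on or date.today()
    window_start = on - timedelta(days=179)  # 180-day rolling window
    used = 0
    for entry, exit_ in stays:
        # Clip each stay to the rolling window
        start = max(entry, window_start)
        end = min(exit_, on)
        if start <= end:
            used += (end - start).days + 1  # both endpoints count
    return max(0, 90 - used)

# Example: a 30-day trip that ended 11 days ago
today = date(2025, 6, 1)
trip = [(today - timedelta(days=40), today - timedelta(days=11))]
print(days_remaining(trip, on=today))  # 60
```

The subtle part — and the reason paper counting goes wrong — is that the window rolls: old days "expire" one at a time as the 180-day window slides forward.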

What It Costs

| What | Cost/mo |
| --- | --- |
| Claude Pro (Anthropic) | $100 |
| ChatGPT Plus | $20 |
| DeepSeek V3 API (via OpenRouter) | ~$10 |
| Jina AI (embeddings) | ~$5 |
| Server | ~$15 |
| Domain | ~$1 |
| **Total** | **~$150** |

Claude and ChatGPT are my development tools. That's where I vibe-code: describe the task, iterate, debug. Claude Code via CLI works directly on the server — edits files, runs commands, reads logs.

DeepSeek and Jina AI are the site's internals. They run in production and do all the heavy lifting: scoring article relevance, generating tags, translating, clustering. Workhorses processing hundreds of articles a day.

Europe Schengen visa calculator showing remaining days

The Stack

Laravel 12 + Livewire 3 + Tailwind CSS v4
PostgreSQL + Redis
Filament v4 (admin panel)
LiteLLM (LLM proxy/router)
Horizon (queues)
Puppeteer (OG image generation)

One server, no Kubernetes. A monolith.

How an Article Becomes a Story

RSS / Bing News / Google News
  │
  ├─ 1. Parse & extract
  │
  ├─ 2. Deduplicate (URL + title similarity)
  │
  ├─ 3. Relevance scoring (DeepSeek V3)
  │     └─ "Is this about van life?" → 0-100
  │     └─ ~$0.001 per article
  │
  ├─ 4. Enrichment (DeepSeek V3)
  │     └─ Tags, category, country, summary
  │
  ├─ 5. Semantic clustering (Jina AI)
  │     └─ 768-dim vector → find nearest cluster
  │     └─ "15 articles about Spain camping ban" → 1 story
  │
  ├─ 6. Translation into 7 languages (DeepSeek V3)
  │     └─ ~$0.005 per article for all languages
  │
  ├─ 7. Moderation (auto or manual via Filament)
  │
  ├─ 8. Classification into story or event
  │     └─ About a specific event → creates an Event
  │     └─ General news → attaches to a Story cluster
  │
  └─ 9. Publishing
        ├─ Website (7 locales)
        ├─ Telegram, WhatsApp, VK, Facebook, Threads
        └─ Search engine pings (IndexNow, Bing API, WebSub)

From RSS appearance to published story: 15-30 minutes. No human involved.
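Step 2 (dedup by URL + title similarity) can be approximated with normalized URLs plus a fuzzy headline check. A toy Python version of the idea — not the production code; the threshold and helper names are illustrative:

```python
import difflib
from urllib.parse import urlsplit

def normalize_url(url: str) -> str:
    # Strip scheme, query string, "www." and trailing slashes, so the
    # same article shared with tracking params still collides.
    parts = urlsplit(url)
    return parts.netloc.lower().removeprefix("www.") + parts.path.rstrip("/")

def is_duplicate(article, seen, threshold=0.85):
    url_key = normalize_url(article["url"])
    for other in seen:
        if normalize_url(other["url"]) == url_key:
            return True
        # Fuzzy title match catches the same wire story republished
        # under a slightly edited headline.
        ratio = difflib.SequenceMatcher(
            None, article["title"].lower(), other["title"].lower()
        ).ratio()
        if ratio >= threshold:
            return True
    return False

seen = [{"url": "https://example.com/spain-camping-ban?utm_source=rss",
         "title": "Spain tightens wild camping rules"}]
new = {"url": "http://www.example.com/spain-camping-ban/",
       "title": "Spain Tightens Wild Camping Rules"}
print(is_duplicate(new, seen))  # True (URLs collide after normalization)
```

This cheap check only catches literal re-posts; the semantically-same-but-differently-worded stories are what the embedding clustering in step 5 is for.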

Filament admin dashboard with moderation queue and statistics

How LLM Calls Work

Every task is defined in config — model, temperature, prompt:

<?php
// config/llm.php
return [
    'tasks' => [
        'relevance' => [
            'model' => 'deepseek-chat',
            'temperature' => 0.1,
            'max_tokens' => 100,
            'system' => 'You are a content classifier...',
            // Double quotes so \n is a real newline, not a literal backslash-n
            'user' => "Rate relevance 0-100: {title}\n{summary}",
        ],
        'translate' => [
            'model' => 'deepseek-chat',
            'temperature' => 0.3,
            'system' => 'Translate to {target_language}...',
        ],
    ],
];

// Call:
$result = app(LlmManager::class)->runTask(
    task: 'relevance',
    variables: ['title' => $article->title, 'summary' => $article->summary]
);

Everything goes through a LiteLLM proxy. If DeepSeek goes down tomorrow, I switch to any model on OpenRouter with one line in the config. No code changes.
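The config-driven pattern is easy to mirror in any language. A toy Python sketch of the same idea — fill the prompt templates from config, build an OpenAI-compatible payload, send it to the proxy (the endpoint comment and function names here are my assumptions, not the site's actual PHP):

```python
TASKS = {
    "relevance": {
        "model": "deepseek-chat",
        "temperature": 0.1,
        "system": "You are a content classifier...",
        "user": "Rate relevance 0-100: {title}\n{summary}",
    },
}

def build_request(task: str, **variables) -> dict:
    """Turn a task definition plus variables into a chat-completion
    payload. Swapping providers means editing TASKS, not this code."""
    cfg = TASKS[task]
    return {
        "model": cfg["model"],
        "temperature": cfg["temperature"],
        "messages": [
            {"role": "system", "content": cfg["system"].format(**variables)},
            {"role": "user", "content": cfg["user"].format(**variables)},
        ],
    }

payload = build_request("relevance",
                        title="Spain camping ban", summary="...")
# The payload would then go to the LiteLLM proxy's OpenAI-compatible
# endpoint, e.g. (hypothetical local port):
# requests.post("http://localhost:4000/v1/chat/completions", json=payload)
```

Because LiteLLM speaks the OpenAI wire format for every backend, "switch models" really is a one-line config change on the caller's side.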

Clustering — the Trickiest Part

When 15 outlets publish the same story, you don't want 15 cards. You want one — with 15 sources.

1. Article arrives
2. Jina AI generates embedding (768-dim vector)
3. Cosine similarity against existing stories
4. Similarity > 0.82 → attach to story
5. No match → new story
6. Best article becomes the "lead"

200+ sources, 500-1,000 incoming articles per day, 30-50 unique stories out.
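The threshold logic above fits in a few lines of Python. In this sketch, toy 2-dimensional vectors stand in for Jina AI's 768-dim embeddings, and the centroid handling is simplified (the first article's vector serves as the cluster centroid; production could average):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def assign_cluster(embedding, stories, threshold=0.82):
    """Attach the article to the most similar existing story,
    or start a new story if nothing clears the threshold."""
    best, best_sim = None, 0.0
    for story in stories:
        sim = cosine(embedding, story["centroid"])
        if sim > best_sim:
            best, best_sim = story, sim
    if best is not None and best_sim > threshold:
        best["articles"].append(embedding)
        return best
    new_story = {"centroid": embedding, "articles": [embedding]}
    stories.append(new_story)
    return new_story

stories = []
assign_cluster([1.0, 0.0], stories)   # first article -> new story
assign_cluster([0.99, 0.05], stories) # near-identical -> joins story 1
assign_cluster([0.0, 1.0], stories)   # orthogonal -> new story
print(len(stories))  # 2
```

The real trick is tuning the threshold: too low and unrelated stories merge, too high and the same camping ban shows up as five "different" stories.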

Search Engine Indexing

New domain — Google doesn't know you exist. Had to set up multiple channels:

Story published
  │
  ├─► IndexNow → Bing, Yandex, Seznam
  ├─► Bing URL Submission API → priority crawl
  ├─► WebSub → Google (RSS hub ping)
  ├─► RSS feeds (15-min cache)
  ├─► Sitemaps with hreflang for 7 locales
  └─► llms.txt → ChatGPT Search, Perplexity
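IndexNow in particular is pleasantly simple: one JSON POST notifies every participating engine. A sketch of building that submission body per the IndexNow protocol (the host and key here are made up; the real key must also be served as a `.txt` file on the site so engines can verify ownership):

```python
import json

def indexnow_payload(host: str, key: str, urls: list) -> dict:
    """Build an IndexNow submission body (see the indexnow.org
    protocol): host, verification key, where the key file lives,
    and the list of new/updated URLs."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

payload = indexnow_payload(
    "openvan.camp",   # host
    "abc123",         # illustrative key, not a real one
    ["https://openvan.camp/en/news/example-story"],
)
body = json.dumps(payload)
# One POST reaches the shared endpoint, which fans out to
# participating engines (Bing, Yandex, Seznam, ...):
# requests.post("https://api.indexnow.org/indexnow", data=body,
#               headers={"Content-Type": "application/json; charset=utf-8"})
```

Google ignores IndexNow, which is why the WebSub ping and sitemaps exist as separate channels.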

Vibe Coding — How It Actually Works

It's not "tell AI what to do and go for a walk."

It's when at 3am you're figuring out why Redis session locks are hanging every Livewire request in the admin panel. Claude wrote the code — and you're debugging why 97 out of 100 PostgreSQL connections are occupied and pages take 15 seconds to load.

It's when you learn that artisan must run as www-data, because you ran it as root once and the whole site went down with a 500 error. AI won't warn you about that in advance — you learn through pain.

Claude Code is a fantastic tool. It sees your project files, runs commands, edits code, reads logs. My CLAUDE.md is 500 lines of rules it follows. But architectural decisions are still mine. AI won't tell you that admin panel widgets are polling the server every 2 seconds and killing everything through session locking. You find that yourself.

Numbers

  • 5,800 stories
  • 650+ events
  • 7 languages
  • 200+ sources
  • 500-1,000 articles processed daily
  • 0 full-time developers

What I'd Do Differently

  1. Learn Claude Code CLI from day one. I resisted it for weeks — seemed intimidating, terminal, commands. Kept vibe-coding through the chat interface instead. When I finally tried Claude Code, everything changed. It reads your files, runs commands, edits code, checks logs — all in one conversation. The productivity difference is night and day. I lost weeks being stubborn about this.

  2. Pick the right stack before writing a single line. I started vibe-coding without understanding what Laravel, Livewire, or Tailwind even were. Just told AI "build me this" and it built... something. On a random mix of tools. Eventually I had to scrap everything and rebuild from scratch on Laravel. If I'd spent two days researching stacks first, I'd have saved two months of rewriting.

  3. Monitoring from day one. Found out about 97/100 occupied PostgreSQL connections when the site was already down.

  4. Disable Livewire polling immediately. Session locking + polling = everything hangs.

  5. Use DeepSeek from the start instead of GPT-4o-mini. 10x cheaper, same quality for structured tasks.

  6. Subscribe to Claude Max immediately and stop fighting with Google AI Studio. I tried to save money by using Google's free tier for coding. Constant context limits, lost conversations, mediocre code suggestions. Switched to Claude Max and the quality jump was absurd. The time I wasted on workarounds cost more than the subscription.

Links

Ask anything about the stack, the pipeline, or living in a van while building software. I'll answer in comments.
